Uncertainty

In Bayesianism, uncertainty is a property assigned to an event by an observer who is not certain about the event.

As a result, the uncertainty of an event is subjective: it depends on the observer. More precisely, it depends on the observer's prior and on the data accessible to the observer.

However, in a special issue of The American Statistician on the topic, WassersteinSL-19 stresses that uncertainty is a feature, not a bug. In fact, the main statistical advice of their editorial is to "accept uncertainty".

Representing uncertainty

Probability distributions are the most common and rigorous way to represent uncertainty. Bayesianism imposes rules on how such probability distributions should be manipulated. In particular, it argues that, upon observing new data, probability distributions should be updated by Bayes' rule.
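
As a minimal sketch of such an update, assuming a discrete grid over a coin's unknown bias (a hypothetical example, not Tournesol's code):

    import numpy as np

    # Bayes' rule on a discrete grid: the posterior is proportional to
    # the likelihood times the prior.
    grid = np.linspace(0.01, 0.99, 99)       # candidate values of the coin's bias
    prior = np.ones_like(grid) / grid.size   # uniform prior over the grid

    def update(prior, heads, tails):
        # Posterior over the bias after observing coin flips.
        likelihood = grid**heads * (1 - grid)**tails
        unnormalized = likelihood * prior
        return unnormalized / unnormalized.sum()

    posterior = update(prior, heads=7, tails=3)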

However, probability distributions are often too cumbersome to manipulate. In practice, instead of manipulating complex probability distributions, statisticians often restrict themselves to analyzing numbers that quantify uncertainty, like the variance or the standard deviation. This is the approach used by Tournesol to measure score uncertainty.
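
For instance, a full posterior can be summarized by its mean and standard deviation. A minimal sketch, reusing the hypothetical coin-bias grid above:

    import numpy as np

    # Summarize a grid posterior by two numbers instead of carrying
    # around the full distribution.
    grid = np.linspace(0.01, 0.99, 99)
    posterior = grid**7 * (1 - grid)**3      # unnormalized posterior (uniform prior)
    posterior /= posterior.sum()

    mean = (grid * posterior).sum()
    variance = ((grid - mean)**2 * posterior).sum()
    std = np.sqrt(variance)                  # a single scalar quantifying uncertainty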

Another solution to represent uncertainty is to use credence intervals. A 95% credence interval for an unknown quantity is an interval such that the subjective credence that the unknown quantity lies within the interval is 95%.
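 
Given samples from a posterior, a 95% credence interval can be estimated from the 2.5% and 97.5% quantiles. A sketch, with a Beta distribution as a stand-in posterior:

    import numpy as np

    # 95% credence interval from posterior samples.
    rng = np.random.default_rng(0)
    samples = rng.beta(8, 4, size=10_000)    # stand-in for posterior samples
    low, high = np.quantile(samples, [0.025, 0.975])
    # The subjective credence that the unknown quantity lies
    # in [low, high] is 95% under this posterior.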

Finally, a last common solution is the use of samples. Intuitively, this corresponds to listing credible scenarios. Samples are especially used by approximate Bayesian algorithms known as Monte Carlo methods, most notably Markov chain Monte Carlo (MCMC) methods; generative models such as generative adversarial networks (GANs) can also be used to produce samples.
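
A minimal MCMC sketch, assuming a random-walk Metropolis-Hastings sampler targeting a density known only up to a constant (a toy illustration, not a production implementation):

    import numpy as np

    def metropolis_hastings(log_density, x0, n_samples, step=0.5, seed=0):
        # Random-walk Metropolis-Hastings over a 1D unnormalized density.
        rng = np.random.default_rng(seed)
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + step * rng.normal()
            # Accept with probability min(1, target(proposal) / target(x)).
            if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
                x = proposal
            samples.append(x)                # each sample is one credible scenario
        return np.array(samples)

    # Example target: a standard normal, known only up to its constant.
    samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=5_000)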

Decision under uncertainty

The most common solution for decision under uncertainty is counterfactual optimization, which consists of comparing expected values conditioned on different decisions. Counterfactual optimization is well justified by the von Neumann-Morgenstern theorem. This approach was formalized for reinforcement learning with a Solomonoff prior by Hutter-2000, and led to the development of AIXI. Hoang-20 argues, however, that counterfactual optimization may become nonsensical in Newcomb-like paradoxes.
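
A minimal sketch of counterfactual optimization, where a hypothetical payoff function and sampled scenarios stand in for the conditional expected values:

    import numpy as np

    # Compare decisions by their expected value over sampled scenarios.
    rng = np.random.default_rng(0)
    scenarios = rng.normal(loc=0.1, scale=1.0, size=10_000)  # uncertain outcomes

    def value(decision, scenario):
        # Hypothetical payoff: exposure times outcome, minus a quadratic cost.
        return decision * scenario - 0.02 * decision**2

    decisions = np.linspace(0.0, 1.0, 11)
    expected_values = [value(d, scenarios).mean() for d in decisions]
    best = decisions[int(np.argmax(expected_values))]    # decision maximizing expectation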

Decision under uncertainty in practical scenarios is widely discussed in Duke-18 and Duke-20.