Law of the unconscious statistician
In probability theory and statistics, the law of the unconscious statistician (LOTUS) is a theorem used to calculate the expected value of a function g(X) of a random variable X when one knows the probability distribution of X but one does not know the distribution of g(X). The form of the law can depend on the form in which one states the probability distribution of the random variable X. If it is a discrete distribution and one knows its probability mass function $f_X$ (but not $f_{g(X)}$), then the expected value of g(X) is

$$\operatorname{E}[g(X)] = \sum_x g(x)\, f_X(x),$$
where the sum is over all possible values x of X. If it is a continuous distribution and one knows its probability density function $f_X$ (but not $f_{g(X)}$), then the expected value of g(X) is

$$\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx.$$
If one knows the cumulative probability distribution function $F_X$ (but not $F_{g(X)}$), then the expected value of g(X) is given by a Riemann–Stieltjes integral

$$\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\,dF_X(x).$$
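A minimal numerical sketch of the discrete and continuous forms, assuming a fair six-sided die for the discrete case and a Uniform(0, 1) variable for the continuous case (both distributions, and the choice g(x) = x², are illustrative, not part of the theorem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete form: X is a fair six-sided die and g(x) = x**2.
x_vals = np.arange(1, 7)
pmf = np.full(6, 1 / 6)                    # f_X(x) = 1/6 for x = 1, ..., 6
lotus_discrete = np.sum(x_vals**2 * pmf)   # sum of g(x) f_X(x) = 91/6

# Monte Carlo check: the distribution of g(X) itself is never used.
samples = rng.integers(1, 7, size=1_000_000)
mc_discrete = np.mean(samples**2)

# Continuous form: X ~ Uniform(0, 1) and g(x) = x**2, so E[g(X)] = 1/3.
dx = 1e-5
mid = np.arange(dx / 2, 1.0, dx)           # midpoint grid on (0, 1)
lotus_continuous = np.sum(mid**2) * dx     # integral of g(x) f_X(x) dx, f_X = 1

print(lotus_discrete, mc_discrete)   # both close to 15.1667
print(lotus_continuous)              # close to 1/3
```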
Etymology
This proposition is known as the law of the unconscious statistician because of a purported tendency to use the identity without realizing that it must be treated as the result of a rigorously proved theorem, not merely a definition.[4]
Joint distributions
A similar property holds for joint distributions. For discrete random variables X and Y, a function of two variables g, and joint probability mass function f(x, y):[5]

$$\operatorname{E}[g(X,Y)] = \sum_y \sum_x g(x,y)\, f(x,y).$$
In the continuous case, with f(x, y) being the joint probability density function,

$$\operatorname{E}[g(X,Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y)\, f(x,y)\,dx\,dy.$$
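A minimal sketch of the discrete joint form, assuming a small made-up joint pmf on {0, 1, 2} × {0, 1} and the illustrative choice g(x, y) = xy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint pmf of (X, Y); rows indexed by x in {0, 1, 2}, columns by y in {0, 1}.
f = np.array([[0.10, 0.15],
              [0.20, 0.25],
              [0.05, 0.25]])
x_vals = np.array([0, 1, 2])
y_vals = np.array([0, 1])

# LOTUS for g(x, y) = x * y: sum g over the joint support, weighted by f.
g = np.outer(x_vals, y_vals)
lotus = np.sum(g * f)                          # = 0.75

# Monte Carlo check: draw (X, Y) pairs directly from the joint pmf.
flat = rng.choice(f.size, size=500_000, p=f.ravel())
xs, ys = np.unravel_index(flat, f.shape)
mc = np.mean(x_vals[xs] * y_vals[ys])

print(lotus, mc)                               # both close to 0.75
```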
Proof
This law is not a trivial result of definitions as it might at first appear, but rather must be proved.[5][6][7]
Continuous case
For a continuous random variable X, let Y = g(X), and suppose that g is differentiable and that its inverse $g^{-1}$ is monotonic (for concreteness, take g to be increasing; the decreasing case is analogous, with the sign change absorbed by reversing the limits of integration). By the formula for inverse functions and differentiation,

$$\frac{d}{dy}\left[g^{-1}(y)\right] = \frac{1}{g'\!\left(g^{-1}(y)\right)}.$$
Because $x = g^{-1}(y)$,

$$dx = \frac{1}{g'\!\left(g^{-1}(y)\right)}\,dy.$$
So that by a change of variables,

$$\int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx = \int_{-\infty}^{\infty} g\!\left(g^{-1}(y)\right) f_X\!\left(g^{-1}(y)\right) \frac{1}{g'\!\left(g^{-1}(y)\right)}\,dy = \int_{-\infty}^{\infty} y\, f_X\!\left(g^{-1}(y)\right) \frac{d}{dy}\left[g^{-1}(y)\right] dy.$$
Now, notice that because the cumulative distribution function of Y is $F_Y(y) = P(Y \le y)$, substituting $Y = g(X)$, applying $g^{-1}$ to both sides of the inequality, and rearranging yields $F_Y(y) = F_X\!\left(g^{-1}(y)\right)$. Then, by the chain rule,

$$f_Y(y) = f_X\!\left(g^{-1}(y)\right) \frac{d}{dy}\left[g^{-1}(y)\right].$$
Combining these expressions, we find

$$\int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx = \int_{-\infty}^{\infty} y\, f_Y(y)\,dy.$$
By the definition of expected value, the right-hand side is $\operatorname{E}[Y]$, so

$$\operatorname{E}[g(X)] = \operatorname{E}[Y] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx.$$
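A minimal numeric check of the key identity above, assuming X ~ Uniform(0, 1) and g(x) = x², so that $g^{-1}(y) = \sqrt{y}$ (both choices are illustrative):

```python
import numpy as np

def f_X(t):
    # Density of Uniform(0, 1).
    return np.where((t > 0.0) & (t < 1.0), 1.0, 0.0)

dy = 1e-5
y = np.arange(dy / 2, 1.0, dy)              # midpoint grid on (0, 1)

# Transformed density from the chain rule: f_Y(y) = f_X(g^{-1}(y)) (g^{-1})'(y),
# which here is 1 / (2 sqrt(y)) on (0, 1).
f_Y = f_X(np.sqrt(y)) / (2.0 * np.sqrt(y))

lhs = np.sum(y * f_Y) * dy                  # integral of y f_Y(y) dy
rhs = np.sum(y**2 * f_X(y)) * dy            # integral of g(x) f_X(x) dx, same grid

print(lhs, rhs)                             # both close to 1/3
```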
Discrete case
Let $Y = g(X)$. Then begin with the definition of expected value and regroup the terms of the sum according to the common value $y = g(x)$:

$$\operatorname{E}[Y] = \sum_y y\, f_Y(y) = \sum_y y \sum_{x:\, g(x)=y} f_X(x) = \sum_y \sum_{x:\, g(x)=y} g(x)\, f_X(x) = \sum_x g(x)\, f_X(x).$$
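A minimal sketch of this regrouping, assuming X uniform on {−2, −1, 0, 1, 2} and g(x) = x² (illustrative choices; several values of x collapse onto each y):

```python
from collections import defaultdict

# X uniform on {-2, -1, 0, 1, 2}; g(x) = x**2.
f_X = {x: 0.2 for x in (-2, -1, 0, 1, 2)}
g = lambda x: x * x

# Build f_Y(y) as the sum of f_X(x) over {x : g(x) = y}, as in the regrouping.
f_Y = defaultdict(float)
for x, p in f_X.items():
    f_Y[g(x)] += p                                  # f_Y = {4: 0.4, 1: 0.4, 0: 0.2}

e_via_y = sum(y * p for y, p in f_Y.items())        # sum over y of y f_Y(y)
e_via_x = sum(g(x) * p for x, p in f_X.items())     # sum over x of g(x) f_X(x)

print(e_via_y, e_via_x)                             # both equal 2.0
```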
From measure theory
A technically complete derivation of the result is available using arguments in measure theory, in which the probability space of a transformed random variable g(X) is related to that of the original random variable X. The steps here involve defining a pushforward measure $P_X$ for the transformed space, and the result is then an example of a change of variables formula:[5]

$$\operatorname{E}[g(X)] = \int_\Omega g(X)\,dP = \int_{\mathbb{R}} g(x)\,dP_X(x).$$
We say X has a density if its distribution measure $P_X$ is absolutely continuous with respect to the Lebesgue measure $\mu$. In that case

$$dP_X = f\,d\mu,$$
where $f = \frac{dP_X}{d\mu}$ is the density (see Radon–Nikodym derivative). So the above can be rewritten as the more familiar

$$\operatorname{E}[g(X)] = \int_{\mathbb{R}} g(x)\, f(x)\,dx.$$
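A minimal sketch of the pushforward viewpoint, assuming X standard normal and g(x) = x² (illustrative): averaging g over draws of X integrates g against $P_X$ directly, and no density for g(X) is ever constructed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draws of X are distributed according to the pushforward measure P_X,
# so the sample mean of g(X) is a Monte Carlo value of the integral of g dP_X.
x = rng.standard_normal(1_000_000)   # X ~ N(0, 1); P_X is the Gaussian measure
estimate = np.mean(x**2)             # g(x) = x**2

print(estimate)                      # close to E[X**2] = 1
```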
References
1. Key, Eric (1998). "Lecture 6: Random variables". Lecture notes, University of Leeds.
2. Ringner, Bengt (2009). "Law of the unconscious statistician". Unpublished note, Centre for Mathematical Sciences, Lund University.
3. Blitzstein, Joseph K.; Hwang, Jessica (2014). Introduction to Probability (1st ed.). Chapman and Hall. p. 156.
4. DeGroot, Morris; Schervish, Mark (2014). Probability and Statistics (4th ed.). Pearson Education Limited. p. 213.
5. Ross, Sheldon M. (2010). Introduction to Probability Models (10th ed.). Elsevier.
6. Virtual Laboratories in Probability and Statistics, Sect. 3.1 "Expected Value: Definition and Properties", item "Basic Results: Change of Variables Theorem".
7. Rumbos, Adolfo J. (2008). "Probability lecture notes" (PDF). Retrieved 6 November 2018.