Expected value

In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of the numbers that come up approaches 3.5 as the number of rolls approaches infinity (see § Examples for details). In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.[1][2]

The expected value does not exist for random variables whose distributions have heavy "tails", such as the Cauchy distribution.[3] For such random variables, the long tails of the distribution prevent the sum or integral from converging.

The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value ($\operatorname{Var}(X) = \operatorname{E}[(X - \operatorname{E}[X])^2] = \operatorname{E}[X^2] - (\operatorname{E}[X])^2$).

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator—that is if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber or information security breach).[4]

Definition

Finite case

Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k$, respectively. The expectation of $X$ is defined as
$$\operatorname{E}[X] = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k = \sum_{i=1}^{k} x_i\,p_i.$$

Since all probabilities $p_i$ add up to 1 ($p_1 + p_2 + \cdots + p_k = 1$), the expected value is the weighted average, with the $p_i$'s being the weights.

If all outcomes $x_i$ are equiprobable (that is, $p_1 = p_2 = \cdots = p_k = \frac{1}{k}$), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes $x_i$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others. The intuition however remains the same: the expected value of $X$ is what one expects to happen on average.

Examples

  • Let $X$ represent the outcome of a roll of a fair six-sided die. More specifically, $X$ will be the number of pips showing on the top face of the die after the toss. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of $X$ is
$$\operatorname{E}[X] = 1 \cdot \frac16 + 2 \cdot \frac16 + 3 \cdot \frac16 + 4 \cdot \frac16 + 5 \cdot \frac16 + 6 \cdot \frac16 = 3.5.$$

If one rolls the die $n$ times and computes the average (arithmetic mean) of the results, then as $n$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, which has the average of 3.1, at a distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls and 93.7% for a thousand rolls. More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry–Esseen theorem. A short simulation sketch follows these examples.
  • The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable $X$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
$$\operatorname{E}[\,\text{gain from a } \$1 \text{ bet}\,] = -\$1 \cdot \frac{37}{38} + \$35 \cdot \frac{1}{38} = -\$\frac{1}{19} \approx -\$0.0526.$$

That is, the bet of $1 stands to lose about $0.0526 on average, so its expected value is −$0.0526.
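The simulation below is a minimal sketch (assuming Python with NumPy, which the article does not prescribe) of the convergence described in the die example: the average of simulated rolls drifts toward 3.5 as the number of rolls grows.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate n rolls of a fair six-sided die and compare the sample
# average with the expected value of 3.5.
for n in (10, 100, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)  # uniform on {1, ..., 6}
    print(n, rolls.mean())
# The printed averages approach 3.5 as n grows, as the law of
# large numbers predicts.
```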

Countably infinite case

Let $X$ be a random variable with a countable set of outcomes $x_1, x_2, \ldots$ occurring with probabilities $p_1, p_2, \ldots$, respectively, such that the infinite sum $\sum_{i=1}^{\infty} |x_i|\,p_i$ converges. The expected value of $X$ is defined as the series
$$\operatorname{E}[X] = \sum_{i=1}^{\infty} x_i\,p_i.$$

Remark 1. Observe that $\bigl|\operatorname{E}[X]\bigr| \le \sum_{i=1}^{\infty} |x_i|\,p_i < \infty$.
Remark 2. Due to absolute convergence, the expected value does not depend on the order in which the outcomes are presented. By contrast, a conditionally convergent series can be made to converge or diverge arbitrarily, via the Riemann rearrangement theorem.

Example

  • Suppose $x_i = i$ and $p_i = \frac{c}{i \cdot 2^i}$ for $i = 1, 2, 3, \ldots$, where $c = \frac{1}{\ln 2}$ (with $\ln$ being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then
$$\operatorname{E}[X] = \sum_{i=1}^{\infty} i \cdot \frac{c}{i \cdot 2^i} = c\left(\frac12 + \frac14 + \frac18 + \cdots\right).$$

Since this series converges absolutely, the expected value of $X$ is $c = \frac{1}{\ln 2} \approx 1.44$. (A numerical check of this example appears after the list.)
  • For an example that is not absolutely convergent, suppose random variable $X$ takes values 1, −2, 3, −4, ..., with respective probabilities $\frac{c}{1^2}, \frac{c}{2^2}, \frac{c}{3^2}, \frac{c}{4^2}, \ldots$, where $c = \frac{6}{\pi^2}$ is a normalizing constant that ensures the probabilities sum up to one. Then the infinite sum
$$\sum_{i=1}^{\infty} x_i\,p_i = c\left(1 - \frac12 + \frac13 - \frac14 + \cdots\right)$$

converges and its sum is equal to $c \ln 2 = \frac{6 \ln 2}{\pi^2} \approx 0.421$. However it would be incorrect to claim that the expected value of $X$ is equal to this number; in fact $\operatorname{E}[X]$ does not exist (finite or infinite), as this series does not converge absolutely (see Alternating harmonic series).
  • An example that diverges arises in the context of the St. Petersburg paradox. Let $x_i = 2^i$ and $p_i = \frac{1}{2^i}$ for $i = 1, 2, 3, \ldots$. The expected value calculation gives
$$\sum_{i=1}^{\infty} x_i\,p_i = 2 \cdot \frac12 + 4 \cdot \frac14 + 8 \cdot \frac18 + \cdots = 1 + 1 + 1 + \cdots.$$

Since this does not converge but instead keeps growing, the expected value is infinite.
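As a numerical sanity check of the first example above (a sketch assuming plain Python; the truncation at 60 terms is an arbitrary choice), the partial sums reproduce $c = 1/\ln 2$:

```python
import math

c = 1 / math.log(2)                      # normalizing constant of the first example
# Partial sums of sum_i x_i * p_i with x_i = i and p_i = c / (i * 2**i):
# each term equals c / 2**i, so the series converges to c.
expected = sum(i * c / (i * 2**i) for i in range(1, 60))
total_prob = sum(c / (i * 2**i) for i in range(1, 60))
print(total_prob)   # ~1.0, confirming the probabilities sum to one
print(expected, c)  # both ~1.4427
```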

Absolutely continuous case

If $X$ is a random variable whose cumulative distribution function admits a density $f(x)$, then the expected value is defined as the following Lebesgue integral:
$$\operatorname{E}[X] = \int_{\mathbb{R}} x f(x)\,dx.$$

Remark. From a computational perspective, the integral in the definition of $\operatorname{E}[X]$ may often be treated as an improper Riemann integral $\int_{-\infty}^{+\infty} x f(x)\,dx$. Specifically, if the function $x f(x)$ is Riemann-integrable on every finite interval $[a, b]$, and
$$\min\left(\int_{-\infty}^{0} |x| f(x)\,dx,\ \int_{0}^{+\infty} |x| f(x)\,dx\right) < \infty,$$

then the values (whether finite or infinite) of both integrals agree.
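For a concrete computation, here is a minimal sketch (assuming Python with NumPy and SciPy, and an exponential density chosen purely as an illustration) that evaluates the defining integral numerically and compares it with the known mean $1/\lambda$:

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0                                # rate of an exponential distribution (illustrative)
f = lambda x: lam * np.exp(-lam * x)     # density supported on [0, inf)

# E[X] = integral of x * f(x) dx over the support
mean, _ = quad(lambda x: x * f(x), 0, np.inf)
print(mean, 1 / lam)                     # both ~0.5
```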

General case

In general, if $X$ is a random variable defined on a probability space $(\Omega, \Sigma, \operatorname{P})$, then the expected value of $X$, denoted by $\operatorname{E}[X]$, $\operatorname{E}X$, or $\langle X \rangle$, is defined as the Lebesgue integral
$$\operatorname{E}[X] = \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega).$$

Remark 1. If $X_+(\omega) = \max(X(\omega), 0)$ and $X_-(\omega) = -\min(X(\omega), 0)$, then $X = X_+ - X_-$. The functions $X_+$ and $X_-$ can be shown to be measurable (hence, random variables), and, by definition of the Lebesgue integral,
$$\operatorname{E}[X] = \operatorname{E}[X_+] - \operatorname{E}[X_-],$$

where $\operatorname{E}[X_+]$ and $\operatorname{E}[X_-]$ are non-negative and possibly infinite.

The following scenarios are possible:

  • $\operatorname{E}[X]$ is finite, i.e. $\max\bigl(\operatorname{E}[X_+], \operatorname{E}[X_-]\bigr) < \infty$;

  • $\operatorname{E}[X]$ is infinite, i.e. $\max\bigl(\operatorname{E}[X_+], \operatorname{E}[X_-]\bigr) = \infty$ and $\min\bigl(\operatorname{E}[X_+], \operatorname{E}[X_-]\bigr) < \infty$;

  • $\operatorname{E}[X]$ is neither finite nor infinite, i.e. $\operatorname{E}[X_+] = \operatorname{E}[X_-] = \infty$.

Remark 2. If $F_X(x)$ is the cumulative distribution function of $X$, then
$$\operatorname{E}[X] = \int_{-\infty}^{+\infty} x\,dF_X(x),$$

where the integral is interpreted in the sense of Lebesgue–Stieltjes.

Remark 3. An example of a distribution for which there is no expected value is the Cauchy distribution.

Remark 4. For multidimensional random variables, their expected value is defined per component, i.e.
$$\operatorname{E}[(X_1, \ldots, X_n)] = \bigl(\operatorname{E}[X_1], \ldots, \operatorname{E}[X_n]\bigr),$$

and, for a random matrix $X$ with elements $X_{ij}$,
$$\bigl(\operatorname{E}[X]\bigr)_{ij} = \operatorname{E}[X_{ij}].$$

Basic properties

The properties below replicate or follow immediately from those of the Lebesgue integral.

If $A$ is an event, then $\operatorname{E}[{\mathbf 1}_A] = \operatorname{P}(A)$, where ${\mathbf 1}_A$ is the indicator function of the set $A$.

Proof. By definition of the Lebesgue integral of the simple function ${\mathbf 1}_A$,
$$\operatorname{E}[{\mathbf 1}_A] = 1 \cdot \operatorname{P}(A) + 0 \cdot \operatorname{P}(\Omega \setminus A) = \operatorname{P}(A).$$
If X = Y (a.s.) then E[X] = E[Y]

The statement follows from the definition of the Lebesgue integral ($X_+ = Y_+$ (a.s.) and $X_- = Y_-$ (a.s.)), and from the fact that changing a simple random variable on a set of probability zero does not alter the expected value.

Expected value of a constant

If $X$ is a random variable and $X = c$ (a.s.), where $c \in \mathbb{R}$, then $\operatorname{E}[X] = c$. In particular, for an arbitrary random variable $X$, $\operatorname{E}\bigl[\operatorname{E}[X]\bigr] = \operatorname{E}[X]$.

Linearity

The expected value operator (or expectation operator) $\operatorname{E}[\cdot]$ is linear in the sense that
$$\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y], \qquad \operatorname{E}[aX] = a\operatorname{E}[X],$$

where $X$ and $Y$ are arbitrary random variables and $a$ is a constant (a short worked example follows the list below).

More rigorously, let $X$ and $Y$ be random variables whose expected values are defined (different from $\infty - \infty$).
  • If $\operatorname{E}[X] + \operatorname{E}[Y]$ is also defined (i.e. differs from $\infty - \infty$), then $\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y]$.

  • Let $\operatorname{E}[X]$ be finite, and let $a$ be a finite scalar. Then $\operatorname{E}[aX] = a\operatorname{E}[X]$.
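As a short worked illustration (not part of the original statement), linearity gives the expected total of two fair six-sided dice, and the expectation of an affine function of a single die, without enumerating outcomes:
$$\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y] = 3.5 + 3.5 = 7, \qquad \operatorname{E}[2X + 1] = 2\operatorname{E}[X] + 1 = 8.$$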

E[X] exists and is finite if and only if E[|X|] is finite

The following statements regarding a random variable $X$ are equivalent:
  • $\operatorname{E}[X]$ exists and is finite.

  • Both $\operatorname{E}[X_+]$ and $\operatorname{E}[X_-]$ are finite.

  • $\operatorname{E}|X|$ is finite.

Sketch of proof. Indeed, $|X| = X_+ + X_-$. By linearity, $\operatorname{E}|X| = \operatorname{E}[X_+] + \operatorname{E}[X_-]$. The above equivalency relies on the definition of the Lebesgue integral and the measurability of $X$.

Remark. For the reasons above, the expressions "$X$ is integrable" and "the expected value of $X$ is finite" are used interchangeably when speaking of a random variable throughout this article.

If X ≥ 0 (a.s.) then E[X] ≥ 0

Monotonicity

If $X \le Y$ (a.s.), and both $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist, then $\operatorname{E}[X] \le \operatorname{E}[Y]$.

Remark. $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist in the sense that $\min\bigl(\operatorname{E}[X_+], \operatorname{E}[X_-]\bigr) < \infty$ and $\min\bigl(\operatorname{E}[Y_+], \operatorname{E}[Y_-]\bigr) < \infty$.

The proof follows from the linearity and the previous property applied to $Z = Y - X$, since $Z \ge 0$ (a.s.).

If |X| ≤ Y (a.s.) and E[Y] is finite then so is E[X]

Let $X$ and $Y$ be random variables such that $|X| \le Y$ (a.s.) and $\operatorname{E}[Y] < \infty$. Then $\operatorname{E}[X]$ exists and is finite.

Proof. Due to the non-negativity of $|X|$, $\operatorname{E}|X|$ exists, finite or infinite. By monotonicity, $\operatorname{E}|X| \le \operatorname{E}[Y] < \infty$, so $\operatorname{E}|X|$ is finite, which, as we saw earlier, is equivalent to $\operatorname{E}[X]$ being finite.

If E[X²] < ∞ and P(Ω) = 1 then E[X] is finite

The proposition below will be used to prove the extremal property of $\operatorname{E}[X]$ later on.

Proposition. If $X$ is a random variable, then so is $X^2$, where $X^2(\omega) = \bigl(X(\omega)\bigr)^2$ for every $\omega \in \Omega$. If, in addition, $\operatorname{P}(\Omega) = 1$ and $\operatorname{E}[X^2] < \infty$, then $\operatorname{E}[X]$ is finite. (Indeed, $|X| \le \tfrac{1}{2}(1 + X^2)$, so $\operatorname{E}|X| \le \tfrac{1}{2}\bigl(1 + \operatorname{E}[X^2]\bigr) < \infty$.)

Counterexample for infinite measure

The requirement that $\operatorname{P}(\Omega) = 1$ (i.e. that the underlying measure be finite) is essential. By way of counterexample, consider the measurable space
$$\bigl([1, +\infty),\ \mathcal{B}_{[1,+\infty)},\ \lambda\bigr),$$

where $\mathcal{B}_{[1,+\infty)}$ is the Borel $\sigma$-algebra on the interval $[1, +\infty)$ and $\lambda$ is the linear Lebesgue measure. The reader can prove that $\int_1^{+\infty} \frac{1}{x}\,\lambda(dx) = \infty$ even though $\int_1^{+\infty} \left(\frac{1}{x}\right)^2 \lambda(dx) = 1$. (Sketch of proof: $\mu(A) = \int_A \frac{1}{x}\,\lambda(dx)$ and $\nu(A) = \int_A \frac{1}{x^2}\,\lambda(dx)$ define measures on $\mathcal{B}_{[1,+\infty)}$. Use "continuity from below" w.r. to these measures and reduce to the Riemann integral on each finite subinterval $[1, N]$.)

Extremal property

Recall, as we proved early on, that if $X$ is a random variable, then so is $X^2$.

Proposition (extremal property of $\operatorname{E}[X]$). Let $X$ be a random variable with $\operatorname{E}[X^2] < \infty$. Then $\operatorname{E}[X]$ and $\operatorname{Var}[X]$ are finite, and $\operatorname{E}[X]$ is the best least squares approximation for $X$ among constants. Specifically,
  • for every $c \in \mathbb{R}$, $\operatorname{E}[(X - c)^2] \ge \operatorname{Var}[X]$;

  • equality holds if and only if $c = \operatorname{E}[X]$

($\operatorname{Var}[X]$ denotes the variance of $X$).

Remark (intuitive interpretation of extremal property). In intuitive terms, the extremal property says that if one is asked to predict the outcome of a trial of a random variable $X$, then $\operatorname{E}[X]$ is, in some practically useful sense, one's best bet if no advance information about the outcome is available. If, on the other hand, one does have some advance knowledge regarding the outcome, then, again in some practically useful sense, one's bet may be improved upon by using conditional expectations (of which $\operatorname{E}[X]$ is a special case) rather than $\operatorname{E}[X]$.

Proof of proposition. By the above properties, both $\operatorname{E}[X]$ and $\operatorname{Var}[X]$ are finite, and
$$\operatorname{E}[(X - c)^2] = \operatorname{E}[X^2] - 2c\operatorname{E}[X] + c^2 = \operatorname{Var}[X] + \bigl(\operatorname{E}[X] - c\bigr)^2,$$

whence the extremal property follows.

Non-degeneracy

If $\operatorname{E}|X| = 0$, then $X = 0$ (a.s.).

If E[X] < +∞ then X < +∞ (a.s.)

Corollary: if E[X] > −∞ then X > −∞ (a.s.)

Corollary: if E|X| < ∞ then |X| < ∞ (a.s.)

For an arbitrary random variable $X$, $\bigl|\operatorname{E}[X]\bigr| \le \operatorname{E}|X|$.

Proof. By definition of the Lebesgue integral,
$$\bigl|\operatorname{E}[X]\bigr| = \bigl|\operatorname{E}[X_+] - \operatorname{E}[X_-]\bigr| \le \operatorname{E}[X_+] + \operatorname{E}[X_-] = \operatorname{E}\bigl[X_+ + X_-\bigr] = \operatorname{E}|X|.$$
This result can also be proved based on Jensen's inequality.

Non-multiplicativity

In general, the expected value operator is not multiplicative, i.e. $\operatorname{E}[XY]$ is not necessarily equal to $\operatorname{E}[X] \cdot \operatorname{E}[Y]$. Indeed, let $X$ assume the values of 1 and −1 with probability 0.5 each. Then
$$\operatorname{E}[X] \cdot \operatorname{E}[X] = \bigl(1 \cdot 0.5 + (-1) \cdot 0.5\bigr)^2 = 0,$$

and
$$\operatorname{E}[X \cdot X] = \operatorname{E}[X^2] = 1^2 \cdot 0.5 + (-1)^2 \cdot 0.5 = 1,$$

so $\operatorname{E}[X \cdot X] \ne \operatorname{E}[X] \cdot \operatorname{E}[X]$.
The amount by which the multiplicativity fails is called the covariance:
$$\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$

However, if $X$ and $Y$ are independent, then $\operatorname{Cov}(X, Y) = 0$, and $\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y]$.
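A short simulation sketch (assuming Python with NumPy; the distributions and sample size are purely illustrative) shows multiplicativity holding, up to sampling noise, for independent variables and failing by exactly the covariance for dependent ones:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000

# Independent X and Y: E[XY] should match E[X] * E[Y] (covariance ~ 0).
x = rng.normal(loc=1.0, scale=1.0, size=n)
y = rng.normal(loc=2.0, scale=1.0, size=n)
print((x * y).mean(), x.mean() * y.mean())   # both ~2.0

# Dependent case (Y = X): E[XY] - E[X]E[Y] equals Var(X) ~ 1, not 0.
print((x * x).mean() - x.mean() ** 2)
```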

Counterexample: $\operatorname{E}[X_n] \not\to \operatorname{E}[X]$ despite $X_n \to X$ pointwise

Let $(\Omega, \Sigma, \operatorname{P})$ be the probability space, where $\Omega = [0, 1]$, $\Sigma$ is the Borel $\sigma$-algebra on $[0, 1]$ and $\operatorname{P}$ the linear Lebesgue measure. For $n \ge 1$ define a sequence of random variables
$$X_n = n \cdot {\mathbf 1}_{\left(0, \frac{1}{n}\right)}$$

and a random variable
$$X = 0$$

on $\Omega$, with ${\mathbf 1}_A$ being the indicator function of the set $A \subseteq \Omega$.

For every $\omega \in \Omega$, $X_n(\omega) \to X(\omega)$ as $n \to \infty$, and $\operatorname{E}[X] = 0$,
so $X_n \to X$ pointwise. On the other hand, $\operatorname{E}[X_n] = n \cdot \operatorname{P}\bigl(\bigl(0, \tfrac{1}{n}\bigr)\bigr) = 1$ for every $n$, and hence
$$\lim_{n \to \infty} \operatorname{E}[X_n] = 1 \ne 0 = \operatorname{E}[X].$$

Countable non-additivity

In general, the expected value operator is not $\sigma$-additive, i.e.
$$\operatorname{E}\left[\sum_{i=1}^{\infty} X_i\right] \ne \sum_{i=1}^{\infty} \operatorname{E}[X_i] \quad \text{in general}.$$

By way of counterexample, let $(\Omega, \Sigma, \operatorname{P})$ be the probability space, where $\Omega = [0, 1]$, $\Sigma$ is the Borel $\sigma$-algebra on $[0, 1]$ and $\operatorname{P}$ the linear Lebesgue measure. Define a sequence of random variables $X_n = n \cdot {\mathbf 1}_{\left(0, \frac{1}{n}\right)} - (n + 1) \cdot {\mathbf 1}_{\left(0, \frac{1}{n+1}\right)}$ on $\Omega$, with ${\mathbf 1}_A$ being the indicator function of the set $A \subseteq \Omega$. For the pointwise sums, we have
$$\sum_{i=1}^{n} X_i = {\mathbf 1}_{(0, 1)} - (n + 1) \cdot {\mathbf 1}_{\left(0, \frac{1}{n+1}\right)}, \qquad \sum_{i=1}^{\infty} X_i = {\mathbf 1}_{(0, 1)} \quad \text{(pointwise)}.$$

By finite additivity,
$$\operatorname{E}[X_i] = i \cdot \operatorname{P}\bigl(\bigl(0, \tfrac{1}{i}\bigr)\bigr) - (i + 1) \cdot \operatorname{P}\bigl(\bigl(0, \tfrac{1}{i+1}\bigr)\bigr) = 1 - 1 = 0, \quad \text{so} \quad \sum_{i=1}^{\infty} \operatorname{E}[X_i] = 0.$$

On the other hand, $\operatorname{E}\bigl[{\mathbf 1}_{(0, 1)}\bigr] = 1$, and hence
$$\operatorname{E}\left[\sum_{i=1}^{\infty} X_i\right] = 1 \ne 0 = \sum_{i=1}^{\infty} \operatorname{E}[X_i].$$

Countable additivity for non-negative random variables

Let $\{X_i\}_{i=1}^{\infty}$ be non-negative random variables. It follows from the monotone convergence theorem that
$$\operatorname{E}\left[\sum_{i=1}^{\infty} X_i\right] = \sum_{i=1}^{\infty} \operatorname{E}[X_i].$$

Inequalities

Cauchy–Bunyakovsky–Schwarz inequality

The Cauchy–Bunyakovsky–Schwarz inequality states that
$$\bigl(\operatorname{E}[XY]\bigr)^2 \le \operatorname{E}[X^2] \cdot \operatorname{E}[Y^2].$$

Markov's inequality

For a nonnegative random variable $X$ and $a > 0$, Markov's inequality states that
$$\operatorname{P}(X \ge a) \le \frac{\operatorname{E}[X]}{a}.$$
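A quick empirical illustration (a sketch assuming Python with NumPy; the exponential distribution and the thresholds are arbitrary choices) compares observed tail probabilities with the Markov bound:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
x = rng.exponential(scale=1.0, size=1_000_000)  # non-negative samples, E[X] = 1

for a in (1.0, 2.0, 5.0):
    empirical = (x >= a).mean()   # P(X >= a), estimated from the samples
    bound = x.mean() / a          # Markov bound E[X] / a
    print(a, empirical, bound)    # the empirical tail never exceeds the bound (up to noise)
```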

Bienaymé-Chebyshev inequality

Let $X$ be an arbitrary random variable with finite expected value $\operatorname{E}[X]$ and finite variance $\operatorname{Var}[X] \ne 0$. The Bienaymé-Chebyshev inequality states that, for any real number $k > 0$,
$$\operatorname{P}\Bigl(\bigl|X - \operatorname{E}[X]\bigr| \ge k\sqrt{\operatorname{Var}[X]}\Bigr) \le \frac{1}{k^2}.$$

Jensen's inequality

Let $f$ be a Borel convex function and $X$ a random variable such that $\operatorname{E}|X| < \infty$. Jensen's inequality states that
$$f\bigl(\operatorname{E}[X]\bigr) \le \operatorname{E}\bigl[f(X)\bigr].$$

Remark 1. The expected value $\operatorname{E}\bigl[f(X)\bigr]$ is well-defined even if $f(X)$ is allowed to assume infinite values. Indeed, $\operatorname{E}|X| < \infty$ implies that $|X| < \infty$ (a.s.), so the random variable $f(X(\omega))$ is defined almost surely, and therefore there is enough information to compute $\operatorname{E}\bigl[f(X)\bigr]$.

Remark 2. Jensen's inequality implies that $\bigl|\operatorname{E}[X]\bigr| \le \operatorname{E}|X|$, since the absolute value function is convex.

Lyapunov's inequality

Let $0 < s < t$. Lyapunov's inequality states that
$$\bigl(\operatorname{E}|X|^s\bigr)^{1/s} \le \bigl(\operatorname{E}|X|^t\bigr)^{1/t}.$$

Proof. Applying Jensen's inequality to $|X|^s$ and the convex function $g(y) = y^{t/s}$, we obtain $\bigl(\operatorname{E}|X|^s\bigr)^{t/s} \le \operatorname{E}|X|^t$. Taking the $t$-th root of each side completes the proof.

Corollary. $\operatorname{E}|X| \le \bigl(\operatorname{E}|X|^2\bigr)^{1/2} \le \bigl(\operatorname{E}|X|^3\bigr)^{1/3} \le \cdots \le \bigl(\operatorname{E}|X|^n\bigr)^{1/n} \le \cdots$

Hölder's inequality

Let $p$ and $q$ satisfy $1 \le p \le \infty$, $1 \le q \le \infty$, and $\frac{1}{p} + \frac{1}{q} = 1$. Hölder's inequality states that
$$\operatorname{E}|XY| \le \bigl(\operatorname{E}|X|^p\bigr)^{1/p}\,\bigl(\operatorname{E}|Y|^q\bigr)^{1/q}.$$

Minkowski inequality

Let $p$ be a real number satisfying $1 \le p \le \infty$. Let, in addition, $\operatorname{E}|X|^p < \infty$ and $\operatorname{E}|Y|^p < \infty$. Then, according to the Minkowski inequality, $\operatorname{E}|X + Y|^p$ is finite and
$$\bigl(\operatorname{E}|X + Y|^p\bigr)^{1/p} \le \bigl(\operatorname{E}|X|^p\bigr)^{1/p} + \bigl(\operatorname{E}|Y|^p\bigr)^{1/p}.$$

Taking limits under the $\operatorname{E}$ sign

Monotone convergence theorem

Let the sequence of random variables $\{X_n\}$ and the random variables $X$ and $Y$ be defined on the same probability space $(\Omega, \Sigma, \operatorname{P})$. Suppose that
  • all the expected values $\operatorname{E}[X_n]$, $\operatorname{E}[X]$, and $\operatorname{E}[Y]$ are defined (differ from $\infty - \infty$);

  • $\operatorname{E}[Y] > -\infty$ and $Y \le X_n \le X_{n+1}$ (a.s.), for every $n \ge 1$;

  • $X$ is the pointwise limit of $\{X_n\}$ (a.s.), i.e. $X(\omega) = \lim_{n} X_n(\omega)$ (a.s.).

The monotone convergence theorem states that
$$\lim_{n \to \infty} \operatorname{E}[X_n] = \operatorname{E}[X].$$

Fatou's lemma

Let the sequence of random variables $\{X_n\}$ and the random variable $Y$ be defined on the same probability space $(\Omega, \Sigma, \operatorname{P})$. Suppose that
  • all the expected values $\operatorname{E}[X_n]$ and $\operatorname{E}[Y]$ are defined (differ from $\infty - \infty$);

  • $\operatorname{E}[Y] > -\infty$ and $X_n \ge Y$ (a.s.), for every $n \ge 1$.

Fatou's lemma states that
$$\operatorname{E}\Bigl[\liminf_{n \to \infty} X_n\Bigr] \le \liminf_{n \to \infty} \operatorname{E}[X_n].$$

($\liminf_{n \to \infty} X_n$ is a random variable, for every sequence $\{X_n\}$ of random variables, by the properties of the limit inferior.)

Corollary. Let

  • $X_n \to X$ pointwise (a.s.);

  • $X_n \le C$ (a.s.), for some constant $C$ (independent of $n$);

  • $X_n \ge 0$ (a.s.), for every $n \ge 1$.

Then $\lim_{n \to \infty} \operatorname{E}[X_n] = \operatorname{E}[X]$.
The proof is by observing that $0 \le X_n \le C$ (a.s.) and applying Fatou's lemma to the sequences $\{X_n\}$ and $\{C - X_n\}$.

Dominated convergence theorem

Let $\{X_n\}$ be a sequence of random variables, with $X_n \to X$ pointwise (a.s.), $|X_n| \le Y$ (a.s.) for every $n$, and $\operatorname{E}[Y] < \infty$. Then, according to the dominated convergence theorem,
  • the function $X$ is measurable (hence a random variable);

  • $\operatorname{E}|X| \le \operatorname{E}[Y] < \infty$;

  • all the expected values $\operatorname{E}[X_n]$ and $\operatorname{E}[X]$ are defined (do not have the form $\infty - \infty$);

  • $\lim_{n \to \infty} \operatorname{E}[X_n] = \operatorname{E}[X]$ (both sides are finite).

Uniform integrability

In some cases, the equality $\lim_{n \to \infty} \operatorname{E}[X_n] = \operatorname{E}\bigl[\lim_{n \to \infty} X_n\bigr]$ holds when the sequence $\{X_n\}$ is uniformly integrable.

Relationship with characteristic function

The probability density function $f_X$ of a scalar random variable $X$ is related to its characteristic function $\varphi_X$ by the inversion formula:
$$f_X(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt.$$

For the expected value of $g(X)$ (where $g : \mathbb{R} \to \mathbb{R}$ is a Borel function), we can use this inversion formula to obtain
$$\operatorname{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} g(x) \left[\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt\right] dx.$$

If $\operatorname{E}[g(X)]$ is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,
$$\operatorname{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} G(t)\,\varphi_X(t)\,dt,$$

where
$$G(t) = \int_{\mathbb{R}} g(x)\,e^{-itx}\,dx$$

is the Fourier transform of $g(x)$. The expression for $\operatorname{E}[g(X)]$ also follows directly from the Plancherel theorem.

Uses and applications

The mass of the probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g.
$$\operatorname{P}(X \in \mathcal{A}) = \operatorname{E}\bigl[{\mathbf 1}_{\mathcal{A}}(X)\bigr],$$
where ${\mathbf 1}_{\mathcal{A}}(X)$ is the indicator function of the set $\mathcal{A}$.
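A minimal Monte Carlo sketch (assuming Python with NumPy; the event, distribution, and sample size are illustrative) estimates a probability as the sample mean of an indicator:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
x = rng.normal(size=1_000_000)   # samples of X ~ N(0, 1)

# P(X > 1.96) written as E[1_{X > 1.96}] and estimated by a sample mean
indicator = (x > 1.96).astype(float)
print(indicator.mean())          # ~0.025, the standard normal upper tail
```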

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].

Expected values can also be used to compute the variance, by means of the computational formula for the variance:
$$\operatorname{Var}(X) = \operatorname{E}[X^2] - \bigl(\operatorname{E}[X]\bigr)^2.$$
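As a quick check (a sketch assuming Python with NumPy; the fair die is used purely as an illustration), the computational formula reproduces the variance of a die roll:

```python
import numpy as np

values = np.arange(1, 7)          # outcomes of a fair six-sided die
probs = np.full(6, 1 / 6)         # uniform probabilities

ex = (values * probs).sum()       # E[X] = 3.5
ex2 = (values**2 * probs).sum()   # E[X^2] = 91/6
print(ex2 - ex**2)                # Var(X) = 35/12 ~ 2.9167
```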

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $\hat{A}$ operating on a quantum state vector $|\psi\rangle$ is written as $\langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle$. The uncertainty in $\hat{A}$ can be calculated using the formula $(\Delta A)^2 = \langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2$.

The law of the unconscious statistician

The expected value of a measurable function of $X$, $g(X)$, given that $X$ has a probability density function $f(x)$, is given by the inner product of $f$ and $g$:
$$\operatorname{E}[g(X)] = \int_{\mathbb{R}} g(x)\,f(x)\,dx.$$

This formula also holds in the multidimensional case, when $g$ is a function of several random variables, and $f$ is their joint density.[5][6]
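A small sketch (assuming Python with NumPy and SciPy; the function $g(x) = x^2$ and the standard normal density are illustrative choices) compares the integral above with a direct sampling estimate of $\operatorname{E}[g(X)]$:

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: x**2                                     # measurable function of X
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density

# Law of the unconscious statistician: E[g(X)] = integral of g(x) f(x) dx
lotus, _ = quad(lambda x: g(x) * f(x), -np.inf, np.inf)

# Sampling estimate of the same quantity
rng = np.random.default_rng(seed=4)
sample = g(rng.normal(size=1_000_000)).mean()
print(lotus, sample)                                   # both ~1.0 (= Var of N(0, 1))
```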

Alternative formula for expected value

Formula for non-negative random variables

Finite and countably infinite case

For a non-negative integer-valued random variable $X : \Omega \to \{0, 1, 2, 3, \ldots\}$,
$$\operatorname{E}[X] = \sum_{i=1}^{\infty} \operatorname{P}(X \ge i).$$

General case

If $X$ is a non-negative random variable, then
$$\operatorname{E}[X] = \int_0^{\infty} \operatorname{P}(X \ge x)\,dx = \int_0^{\infty} \operatorname{P}(X > x)\,dx,$$

and
$$\operatorname{E}[X] = (\mathrm{R})\!\int_0^{\infty} \operatorname{P}(X \ge x)\,dx = (\mathrm{R})\!\int_0^{\infty} \operatorname{P}(X > x)\,dx,$$

where $(\mathrm{R})\!\int$ denotes an improper Riemann integral.
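A brief numerical check of both tail formulas (a sketch assuming Python with NumPy and SciPy; the Poisson and exponential distributions are illustrative choices):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Integer-valued case: Poisson with mean 3, E[X] = sum_{i>=1} P(X >= i)
lam = 3.0
tail_sum = sum(1 - stats.poisson.cdf(i - 1, lam) for i in range(1, 200))
print(tail_sum, lam)                       # both ~3.0

# Continuous case: exponential with mean 2, E[X] = integral of P(X > x) dx
mean = 2.0
tail_integral, _ = quad(lambda x: np.exp(-x / mean), 0, np.inf)
print(tail_integral, mean)                 # both ~2.0
```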

Formula for non-positive random variables

If $X$ is a non-positive random variable, then
$$\operatorname{E}[X] = -\int_{-\infty}^{0} \operatorname{P}(X \le x)\,dx = -\int_{-\infty}^{0} \operatorname{P}(X < x)\,dx,$$

and
$$\operatorname{E}[X] = -(\mathrm{R})\!\int_{-\infty}^{0} \operatorname{P}(X \le x)\,dx = -(\mathrm{R})\!\int_{-\infty}^{0} \operatorname{P}(X < x)\,dx,$$

where $(\mathrm{R})\!\int$ denotes an improper Riemann integral.

This formula follows from that for the non-negative case applied to $-X$.

If, in addition, $X$ is integer-valued, i.e. $X : \Omega \to \{0, -1, -2, -3, \ldots\}$, then
$$\operatorname{E}[X] = -\sum_{i=1}^{\infty} \operatorname{P}(X \le -i).$$

General case

If $X$ can be both positive and negative, then $\operatorname{E}[X] = \operatorname{E}[X_+] - \operatorname{E}[X_-]$, and the above results may be applied to $X_+$ and $X_-$ separately.

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle: that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased that they had found essentially the same solution, and this in turn made them absolutely convinced they had solved the problem conclusively. However, they did not publish their findings; they only informed a small circle of mutual scientific friends in Paris about it.[7]

Three years later, in 1657, a Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth (a+b)/2." More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[8] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[9]

See also

  • Center of mass

  • Central tendency

  • Chebyshev's inequality (an inequality on location and scale parameters)

  • Conditional expectation

  • Expected value is also a key concept in economics, finance, and many other subjects

  • The general term expectation

  • Expectation value (quantum mechanics)

  • Law of total expectation – the expected value of the conditional expected value of X given Y is the same as the expected value of X.

  • Moment (mathematics)

  • Nonlinear expectation (a generalization of the expected value)

  • Wald's equation for calculating the expected value of a random number of random variables

References

[1] Sheldon M. Ross (2007). "§2.4 Expectation of a random variable". Introduction to Probability Models (9th ed.). Academic Press. p. 38 ff. ISBN 0-12-598062-0.

[2] Richard W. Hamming (1991). "§2.5 Random variables, mean and the expected value". The Art of Probability for Scientists and Engineers. Addison–Wesley. p. 64 ff. ISBN 0-201-40686-1.

[3] Richard W. Hamming (1991). "Example 8.7–1 The Cauchy distribution". The Art of Probability for Scientists and Engineers. Addison–Wesley. p. 290 ff. ISBN 0-201-40686-1. "Sampling from the Cauchy distribution and averaging gets you nowhere — one sample has the same distribution as the average of 1000 samples!"

[4] Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274.

[5] "Expectation Value". mathworld.wolfram.com. Retrieved August 8, 2017.

[6] Papoulis, A. (1984). Probability, Random Variables, and Stochastic Processes. New York: McGraw–Hill. pp. 139–152.

[7] Ore, Øystein (1960). "Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286.

[8] Whitworth, W.A. (1901). Choice and Chance with One Thousand Exercises (5th ed.). Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]

[9] "Earliest uses of symbols in probability and statistics". jeff560.tripod.com.