Markov's inequality

In probability theory, Markov's inequality gives an upper bound on the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.
Markov's inequality (and other similar inequalities) relates probabilities to expectations, and provides (frequently loose but still useful) bounds on the cumulative distribution function of a random variable.
Statement
If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a:[1]
\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.
In the language of measure theory, Markov's inequality states that if (X, Σ, μ) is a measure space, f is a measurable extended real-valued function, and ε > 0, then
\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.
This measure-theoretic definition is sometimes referred to as Chebyshev's inequality.[2]
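To get a feel for how the bound behaves, here is a minimal numerical sketch; the exponential distribution, sample size, and threshold values are illustrative assumptions, not part of the statement.

```python
import math
import random

# Minimal sketch: compare Markov's bound E[X]/a with the true tail
# P(X >= a) for X ~ Exponential(1), where E[X] = 1 and P(X >= a) = exp(-a).
# The distribution and thresholds are arbitrary illustrative choices.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

for a in (1.0, 2.0, 5.0):
    empirical_tail = sum(x >= a for x in samples) / n
    markov_bound = mean / a            # E[X]/a
    exact_tail = math.exp(-a)          # tail of Exponential(1)
    print(f"a={a}: tail ~ {empirical_tail:.4f} (exact {exact_tail:.4f}), "
          f"Markov bound {markov_bound:.4f}")
```

The bound always holds, but it is loose for large a, which is the "frequently loose but still useful" behaviour noted above.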
Extended version for monotonically increasing functions
If φ is a monotonically increasing nonnegative function for the nonnegative reals, X is a random variable, a ≥ 0, and φ(a) > 0, then
\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.
An immediate corollary, using higher moments of X supported on values larger than 0, is
\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(|X|^{n})}{a^{n}}.
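As a hedged illustration of this corollary (the Exponential(1) distribution and the thresholds below are assumptions made for the example, not part of the result), higher moments can give sharper tail bounds for large a:

```python
import math

# Sketch: for X ~ Exponential(1), E[X] = 1 and E[X^2] = 2, so the
# n = 1 and n = 2 moment bounds are E[X]/a and E[X^2]/a^2; the true
# tail is exp(-a).
for a in (2.0, 4.0, 8.0):
    exact = math.exp(-a)
    bound_n1 = 1.0 / a        # E[X]/a
    bound_n2 = 2.0 / a**2     # E[X^2]/a^2
    print(f"a={a}: exact {exact:.5f}, n=1 bound {bound_n1:.4f}, "
          f"n=2 bound {bound_n2:.4f}")
```

For large a the second-moment bound decays like 1/a² and overtakes the first-moment bound, which is the typical payoff of using higher moments.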
Proofs
We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.
Intuitive
Intuitively, since X is nonnegative, the outcomes with X ≥ a contribute at least a \cdot \operatorname{P}(X \geq a) to the expectation, while the remaining outcomes contribute a nonnegative amount. Hence \operatorname{E}(X) \geq a \cdot \operatorname{P}(X \geq a), and dividing by a gives the inequality: the tail probability \operatorname{P}(X \geq a) can never exceed \operatorname{E}(X)/a.
Proof in the language of probability theory
Method 1: From the definition of expectation,
\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx,
where f is the probability density function of X (an analogous argument, with sums in place of integrals, works in the discrete case). However, X is a non-negative random variable, thus
\operatorname{E}(X) = \int_{0}^{\infty} x f(x) \, dx.
From this we can derive
\operatorname{E}(X) = \int_{0}^{a} x f(x) \, dx + \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} a f(x) \, dx = a \int_{a}^{\infty} f(x) \, dx = a \operatorname{P}(X \geq a).
From here it is easy to see that
\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.
Method 2: For any event E, let I_E denote the indicator random variable of E, that is, I_E = 1 if E occurs and I_E = 0 otherwise. Given a > 0,
a I_{(X \geq a)} \leq X,
which is clear if we consider the two possible values of I_{(X \geq a)}: if X < a, then a I_{(X \geq a)} = 0 \leq X, and if X \geq a, then a I_{(X \geq a)} = a \leq X. Taking expectations of both sides,
\operatorname{E}(a I_{(X \geq a)}) \leq \operatorname{E}(X).
Now, using linearity of expectations, the left side of this inequality is the same as
a \operatorname{E}(I_{(X \geq a)}) = a (1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)) = a \operatorname{P}(X \geq a).
Thus we have
a \operatorname{P}(X \geq a) \leq \operatorname{E}(X),
and since a > 0, we can divide both sides by a.
In the language of measure theory
We may assume that f is nonnegative, since only its absolute value enters the inequality. Let A denote the set \{x \in X : f(x) \geq \varepsilon\} and let 1_A denote its indicator function. Then
\varepsilon 1_A(x) \leq f(x) 1_A(x) \leq f(x)
for every x ∈ X, and integrating over X gives
\varepsilon \mu(A) = \int_X \varepsilon 1_A \, d\mu \leq \int_X f \, d\mu,
which is the desired bound after dividing both sides by ε.
Corollaries
Chebyshev's inequality
Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,
\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^{2}}
for any a > 0. Here Var(X) is the variance of X, defined as:
\operatorname{Var}(X) = \operatorname{E}\left[(X - \operatorname{E}(X))^{2}\right].
Chebyshev's inequality follows from Markov's inequality by considering the random variable
(X - \operatorname{E}(X))^{2}
and the constant a^{2}, for which Markov's inequality reads
\operatorname{P}\left((X - \operatorname{E}(X))^{2} \geq a^{2}\right) \leq \frac{\operatorname{E}\left[(X - \operatorname{E}(X))^{2}\right]}{a^{2}}.
This argument can be summarized (where "MI" indicates use of Markov's inequality):
\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\left((X - \operatorname{E}(X))^{2} \geq a^{2}\right) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}\left[(X - \operatorname{E}(X))^{2}\right]}{a^{2}} = \frac{\operatorname{Var}(X)}{a^{2}}.
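The chain above can be checked numerically; in this sketch the standard normal distribution and the thresholds are illustrative choices, not part of the derivation:

```python
import random

# Sketch: X ~ Normal(0, 1), so Var(X) = 1 and Chebyshev gives
# P(|X - E[X]| >= a) <= 1/a^2; compare with an empirical estimate.
random.seed(0)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

for a in (1.0, 2.0, 3.0):
    empirical = sum(abs(x - mean) >= a for x in samples) / n
    bound = var / a**2    # Chebyshev bound Var(X)/a^2
    print(f"a={a}: P(|X-E[X]| >= a) ~ {empirical:.4f}, bound {bound:.4f}")
```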
Other corollaries
The "monotonic" result can be demonstrated by:
\operatorname{P}(|X| \geq a) = \operatorname{P}(\varphi(|X|) \geq \varphi(a)) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.
The result that, for a nonnegative random variable X, the quantile function of X satisfies
Q_X(1 - p) \leq \frac{\operatorname{E}(X)}{p}
follows from Markov's inequality via
p \leq \operatorname{P}(X \geq Q_X(1 - p)) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}(X)}{Q_X(1 - p)}.
Let M \succeq 0 be a self-adjoint matrix-valued random variable and a > 0. Then
\operatorname{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}(\operatorname{E}(M))}{a}
can be shown in a similar manner.
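The matrix statement can likewise be checked empirically; in the sketch below the Wishart-type distribution of M, the dimension, and the threshold a are arbitrary illustrative choices:

```python
import numpy as np

# Sketch: M = G G^T / d is positive semidefinite by construction, and
# "M not <= a*I" is equivalent to lambda_max(M) > a. Compare the
# empirical frequency of that event with the bound tr(E[M])/a.
rng = np.random.default_rng(0)
d, n, a = 3, 20_000, 4.0

exceed = 0
trace_sum = 0.0
for _ in range(n):
    G = rng.standard_normal((d, d))
    M = G @ G.T / d
    if np.linalg.eigvalsh(M)[-1] > a:   # eigvalsh returns ascending order
        exceed += 1
    trace_sum += np.trace(M)

print(f"P(M not <= aI) ~ {exceed / n:.4f}, "
      f"bound tr(E[M])/a ~ {trace_sum / n / a:.4f}")
```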
Examples
Assuming no income is negative, Markov's inequality shows that no more than 1/5 of the population can have more than 5 times the average income: taking a = 5 \operatorname{E}(X) gives \operatorname{P}(X \geq 5 \operatorname{E}(X)) \leq \operatorname{E}(X) / (5 \operatorname{E}(X)) = 1/5.
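A one-line arithmetic check of this example; the average income figure is hypothetical, since the original gives none, and the ratio 1/5 does not depend on it:

```python
# Markov: P(income >= a) <= mean_income / a. With a = 5 * mean_income,
# the bound is 1/5 regardless of the actual mean.
mean_income = 40_000            # hypothetical average income
a = 5 * mean_income             # threshold: 5 times the average
print(mean_income / a)          # -> 0.2, i.e. at most 1/5
```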
See also
Paley–Zygmund inequality – a corresponding lower bound
Concentration inequality – a summary of tail bounds on random variables