Median
The median is the value separating the higher half from the lower half of a data sample (a population or a probability distribution). For a data set, it may be thought of as the "middle" value. For example, in the data set {1, 3, 3, 6, 7, 8, 9}, the median is 6, the fourth largest, and also the fourth smallest, number in the sample. For a continuous probability distribution, the median is the value such that a number is equally likely to fall above or below it.
The median is a commonly used measure of the properties of a data set in statistics and probability theory. The basic advantage of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed so much by a small proportion of extremely large or small values, and so it may give a better idea of a "typical" value. For example, in understanding statistics like household income or assets, which vary greatly, the mean may be skewed by a small number of extremely high or low values. Median income, for example, may be a better way to suggest what a "typical" income is.
Because of this, the median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median will not give an arbitrarily large or small result.
Finite set of numbers
The median of a finite list of numbers can be found by arranging all the numbers from smallest to greatest.
If there is an odd number of numbers, the middle one is picked. For example, consider the list of numbers
- 1, 3, 3, 6, 7, 8, 9
This list contains seven numbers. The median is the fourth of them, which is 6.
If there is an even number of numbers, there is no single middle value; the median is then usually defined to be the mean of the two middle values. For example, consider the list of numbers
- 1, 2, 3, 4, 5, 6, 8, 9
This list contains eight numbers. The median is the mean of the fourth and fifth of them, that is, (4 + 5)/2 = 4.5.
Type | Description | Example | Result |
---|---|---|---|
Arithmetic mean | Sum of values of a data set divided by number of values: x̄ = (Σ xᵢ) / n | (1 + 2 + 2 + 3 + 4 + 7 + 9) / 7 | 4 |
Median | Middle value separating the greater and lesser halves of a data set | 1, 2, 2, 3, 4, 7, 9 | 3 |
Mode | Most frequent value in a data set | 1, 2, 2, 3, 4, 7, 9 | 2 |
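As a quick illustration of the table, the three measures can be computed with Python's standard statistics module (an added example, not part of the original text):

```python
# The three common measures of central tendency for the data set in the table.
import statistics

data = [1, 2, 2, 3, 4, 7, 9]
print(statistics.mean(data))    # 4  (sum 28 divided by 7 values)
print(statistics.median(data))  # 3  (middle value of the sorted list)
print(statistics.mode(data))    # 2  (most frequent value)
```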
One can find the median using a stem-and-leaf plot.
The median is used primarily for skewed distributions, which it summarizes differently from the arithmetic mean. Consider the multiset { 1, 2, 2, 2, 3, 14 }. The median is 2 in this case (as is the mode), and it might be seen as a better indication of central tendency (less susceptible to the exceptionally large value in the data) than the arithmetic mean of 4.
The median is a popular summary statistic used in descriptive statistics, since it is simple to understand and easy to calculate, while also giving a measure that is more robust in the presence of outlier values than is the mean. The widely cited empirical relationship between the relative locations of the mean and the median for skewed distributions is, however, not generally true.[5] There are, however, various relationships for the absolute difference between them; see below.
With an even number of observations (as shown above) no value need be exactly at the value of the median. Nonetheless, the value of the median is uniquely determined with the usual definition. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid.
In a population, at most half have values strictly less than the median and at most half have values strictly greater than it. If each set contains less than half the population, then some of the population is exactly equal to the median. For example, if a < b < c, then the median of the list {a, b, c} is b, and, if a < b < c < d, then the median of the list {a, b, c, d} is the mean of b and c; i.e., it is (b + c)/2. As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time a median can still be calculated.[6]
The median can be used as a measure of location when a distribution is skewed, when end-values are not known, or when one requires reduced importance to be attached to outliers, e.g., because they may be measurement errors.
A median is only defined on ordered one-dimensional data, and is independent of any distance metric. A geometric median, on the other hand, is defined in any number of dimensions.
The median is one of a number of ways of summarising the typical values associated with members of a statistical population; thus, it is a possible location parameter. The median is the 2nd quartile, 5th decile, and 50th percentile. Since the median is the same as the second quartile, its calculation is illustrated in the article on quartiles. A median can be worked out for ranked but not numerical classes (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between grades if there is an even number of cases.
When the median is used as a location parameter in descriptive statistics, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation.
For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient than the sample median when the data are uncontaminated by data from heavy-tailed distributions or from mixtures of distributions, but less efficient otherwise, and that the efficiency of the sample median remains reasonably high across a wide range of distributions. More specifically, the median has about 64% efficiency relative to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be about 57% greater than the variance of the mean; see asymptotic efficiency and references therein.
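As an informal check of the efficiency figures above, the following simulation (an added sketch; the sample size, seed, and number of trials are arbitrary choices) compares the sampling variability of the mean and the median for normal data:

```python
# Simulate repeated normal samples and compare the variance of the
# sample mean with the variance of the sample median.
import random
import statistics

random.seed(1)
n, trials = 100, 2000
means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(f"variance of mean:    {var_mean:.5f}")    # about 1/n = 0.01
print(f"variance of median:  {var_median:.5f}")  # about pi/(2n) = 0.0157
print(f"relative efficiency: {var_mean / var_median:.2f}")  # about 0.64
```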
Probability distributions
For any probability distribution on the real line R with cumulative distribution function F, regardless of whether it is any kind of continuous probability distribution, in particular an absolutely continuous distribution (which has a probability density function), or a discrete probability distribution, a median is by definition any real number m that satisfies the inequalities

P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2

or, equivalently, the inequalities

∫_(−∞, m] dF(x) ≥ 1/2 and ∫_[m, ∞) dF(x) ≥ 1/2

in which a Lebesgue–Stieltjes integral is used. For an absolutely continuous probability distribution with probability density function f, the median satisfies

P(X ≤ m) = P(X ≥ m) = ∫_(−∞)^m f(x) dx = 1/2.
Any probability distribution on R has at least one median, but in specific cases there may be more than one median. Specifically, if a probability density is zero on an interval [a, b], and the cumulative distribution function at a is 1/2, any value between a and b will also be a median.
Medians of particular distributions
The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution:
- The median of a symmetric unimodal distribution coincides with the mode.
- The median of a symmetric distribution which possesses a mean μ also takes the value μ. The median of a normal distribution with mean μ and variance σ² is μ. In fact, for a normal distribution, mean = median = mode. The median of a uniform distribution on the interval [a, b] is (a + b)/2, which is also the mean.
- The median of a Cauchy distribution with location parameter x₀ and scale parameter γ is x₀, the location parameter.
- The median of a power law distribution x^(−a), with exponent a > 1, is 2^(1/(a − 1)) x_min, where x_min is the minimum value for which the power law holds.[8]
- The median of an exponential distribution with rate parameter λ is the natural logarithm of 2 divided by the rate parameter: λ^(−1) ln 2.
- The median of a Weibull distribution with shape parameter k and scale parameter λ is λ(ln 2)^(1/k).
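Two of these closed-form medians can be checked numerically; the sketch below (an added illustration with arbitrarily chosen parameters) compares the medians of large simulated samples against the formulas above:

```python
# Numerical check of the exponential and Weibull medians by simulation.
import math
import random
import statistics

random.seed(0)
lam, k = 2.0, 1.5  # rate of the exponential; shape of the Weibull (scale 1)
exp_sample = [random.expovariate(lam) for _ in range(100_000)]
weib_sample = [random.weibullvariate(1.0, k) for _ in range(100_000)]

print(statistics.median(exp_sample), math.log(2) / lam)        # both ~ 0.3466
print(statistics.median(weib_sample), math.log(2) ** (1 / k))  # both ~ 0.783
```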
Populations
Optimality property
The mean absolute error of a real variable c with respect to the random variable X is

E(|X − c|)
Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X.[9] In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations.
More generally, a median is defined as a minimum of

E(|X − c| − |X|),

as discussed below in the section on multivariate medians (specifically, the spatial median).
This optimization-based definition of the median is useful in statistical data-analysis, for example, in k-medians clustering.
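As a numerical illustration of this optimality property (an added sketch; the data set and search grid are arbitrary choices), scanning candidate values c shows that the mean absolute deviation is minimized at the sample median:

```python
# The sample median minimizes the mean absolute deviation sum(|x - c|)/n.
data = [1, 2, 2, 3, 4, 7, 9]

def mean_abs_dev(c):
    return sum(abs(x - c) for x in data) / len(data)

# Scan candidate values of c on a fine grid and pick the minimizer.
candidates = [i / 100 for i in range(0, 1001)]
best = min(candidates, key=mean_abs_dev)
print(best)  # 3.0, the sample median of the data
```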
Unimodal distributions
One can show for a unimodal distribution that the median X̃ and the mean X̄ lie within (3/5)^(1/2) ≈ 0.7746 standard deviations of each other. In symbols,

|X̃ − X̄| / σ ≤ (3/5)^(1/2)

where |·| is the absolute value.
A similar relation holds between the median and the mode: they lie within 3^(1/2) ≈ 1.732 standard deviations of each other:

|X̃ − mode| / σ ≤ 3^(1/2)
Inequality relating means and medians
If the distribution has finite variance, then the distance between the median m and the mean μ is bounded by one standard deviation:

|μ − m| ≤ σ.
This bound was proved by Mallows,[11] who used Jensen's inequality twice, as follows. We have

|μ − m| = |E(X − m)| ≤ E(|X − m|) ≤ E(|X − μ|) ≤ √(E((X − μ)²)) = σ.

The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function a ↦ E(|X − a|).
Jensen's inequality for medians
Jensen's inequality states that for any random variable x with a finite expectation E(x) and for any convex function f,

f(E(x)) ≤ E(f(x))
This inequality generalizes to the median as well. A C function is a real-valued function, defined on the set of real numbers R, with the property that for any real t,

f⁻¹((−∞, t]) = {x ∈ R : f(x) ≤ t}

is a closed interval, a singleton or an empty set. Every convex function is a C function, but the reverse does not hold. It has been shown[16] that if x is a real variable with a unique median m and f is a C function, then f(m) is a median of f(x).
Medians for samples
The sample median
Efficient computation of the sample median
Even though comparison-sorting n items requires Ω(n log n) operations, selection algorithms can compute the kth-smallest of n items with only Θ(n) operations. This includes the median, which is the ⌈n/2⌉th order statistic (for an even number of samples, the arithmetic mean of the two middle order statistics is taken).
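A minimal Python sketch of one such selection algorithm, quickselect (chosen here for illustration; the source speaks of selection algorithms generally), computing the sample median in expected linear time:

```python
# Quickselect: expected-linear-time selection of the k-th smallest element,
# used here to compute the sample median without fully sorting the data.
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed) of items."""
    items = list(items)
    lo, hi = 0, len(items) - 1
    while True:
        if lo == hi:
            return items[lo]
        pivot = items[random.randint(lo, hi)]
        # Three-way partition of the active subrange around the pivot.
        less = [x for x in items[lo:hi + 1] if x < pivot]
        equal = [x for x in items[lo:hi + 1] if x == pivot]
        greater = [x for x in items[lo:hi + 1] if x > pivot]
        items[lo:hi + 1] = less + equal + greater
        if k < lo + len(less):
            hi = lo + len(less) - 1
        elif k < lo + len(less) + len(equal):
            return pivot
        else:
            lo = lo + len(less) + len(equal)

def median(sample):
    n = len(sample)
    if n % 2 == 1:
        return quickselect(sample, n // 2)
    return (quickselect(sample, n // 2 - 1) + quickselect(sample, n // 2)) / 2

print(median([1, 5, 2, 8, 7]))     # 5
print(median([1, 6, 2, 8, 7, 2]))  # 4.0
```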
Selection algorithms still have the downside of requiring Ω(n) memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median-of-three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median as a pivot. A more robust estimator is Tukey's ninther, which is the median-of-three rule applied with limited recursion:[17] if A is the sample laid out as an array, and
- med3(A) = median(A[1], A[n/2], A[n]),
then
- ninther(A) = med3(med3(A[1 ... n/3]), med3(A[n/3 ... 2n/3]), med3(A[2n/3 ... n]))
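A direct Python transcription of the ninther (an added sketch; it assumes each third of the array has at least one element):

```python
# Tukey's ninther: the median of three medians-of-three.
def med3(a, b, c):
    """Median of three values without sorting."""
    return max(min(a, b), min(max(a, b), c))

def ninther(A):
    third = len(A) // 3
    def med3_of(chunk):
        # First, middle and last element of the chunk, as in med3(A) above.
        return med3(chunk[0], chunk[len(chunk) // 2], chunk[-1])
    return med3(med3_of(A[:third]), med3_of(A[third:2 * third]), med3_of(A[2 * third:]))

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
print(ninther(data))  # 5, here equal to the true median of the data
```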
The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.[18]
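A batch sketch of the remedian idea (for clarity this version keeps the whole sample in memory, unlike the single-pass, sub-linear-memory formulation described above; the base b = 3 is an illustrative choice):

```python
# Remedian sketch: repeatedly replace groups of b values by their medians
# until a single estimate remains.
import statistics

def remedian(sample, b=3):
    values = list(sample)
    while len(values) > 1:
        values = [statistics.median(values[i:i + b]) for i in range(0, len(values), b)]
    return values[0]

print(remedian([7, 1, 5, 2, 9, 3, 8, 4, 6]))  # 5, an estimate of the median
```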
Easy explanation of the sample median
For an individual series (when the number of observations is small), one must first arrange all the observations in order. Then n denotes the total number of observations in the given data.
If n is odd, then the median M is the value of the ((n + 1)/2)th item.
If n is even, then the median M is the value of [(n/2)th item + ((n/2) + 1)th item]/2.
- For an odd number of values
As an example, we will calculate the sample median for the following set of observations: 1, 5, 2, 8, 7.
Start by sorting the values: 1, 2, 5, 7, 8.
In this case, the median is 5 since it is the middle observation in the ordered list.
The median is the ((n + 1)/2)th item, where n is the number of values. For example, for the list {1, 2, 5, 7, 8}, we have n = 5, so the median is the ((5 + 1)/2)th item.
- median = (6/2)th item = 3rd item = 5
- For an even number of values
As an example, we will calculate the sample median for the following set of observations: 1, 6, 2, 8, 7, 2.
Start by sorting the values: 1, 2, 2, 6, 7, 8.
In this case, the arithmetic mean of the two middlemost terms is (2 + 6)/2 = 4. Therefore, the median is 4 since it is the arithmetic mean of the middle observations in the ordered list.
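Both worked examples can be checked with Python's standard library, which implements exactly this rule:

```python
# Verifying the two worked examples above with the statistics module.
import statistics

print(statistics.median([1, 5, 2, 8, 7]))     # 5   (odd number of values)
print(statistics.median([1, 6, 2, 8, 7, 2]))  # 4.0 (mean of the two middle values)
```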
Sampling distribution
As an example, consider samples drawn from the discrete distribution given by the probability function f(v) and cumulative distribution function F(v) tabulated below; the median of this distribution is 3, since that is the smallest v with F(v) ≥ 1/2.
v | 0 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 |
---|---|---|---|---|---|---|---|---|---|---|---|
f(v) | 0.000 | 0.008 | 0.010 | 0.013 | 0.083 | 0.108 | 0.328 | 0.220 | 0.202 | 0.023 | 0.005 |
F(v) | 0.000 | 0.008 | 0.018 | 0.031 | 0.114 | 0.222 | 0.550 | 0.770 | 0.972 | 0.995 | 1.000 |
Statistic | n = 3 | n = 9 | n = 15 | n = 21 |
---|---|---|---|---|
Expected value of median | 3.198 | 3.191 | 3.174 | 3.161 |
Standard error of median (above formula) | 0.482 | 0.305 | 0.257 | 0.239 |
Standard error of median (asymptotic approximation) | 0.879 | 0.508 | 0.393 | 0.332 |
Standard error of mean | 0.421 | 0.243 | 0.188 | 0.159 |
The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error.
Estimation of variance from sample data
Efficiency
Other estimators
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median.[27]
If data are represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution.
Coefficient of dispersion
The coefficient of dispersion (CD) is defined as the ratio of the average absolute deviation from the median to the median of the data:

CD = (1/m) · (1/n) Σ |x − m|

where n is the sample size, m is the sample median and x is a variate. The sum is taken over the whole sample.
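A direct Python sketch of this definition (the sample values are illustrative):

```python
# Coefficient of dispersion: average absolute deviation from the median,
# divided by the median.
import statistics

def coefficient_of_dispersion(sample):
    m = statistics.median(sample)
    return sum(abs(x - m) for x in sample) / (len(sample) * m)

print(coefficient_of_dispersion([2, 4, 4, 5, 7, 9]))  # about 0.407
```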
Confidence intervals for a two-sample test in which the sample sizes are large have been derived by Bonett and Seier.[28] This test assumes that both samples have the same median but differ in the dispersion around it. The lower bound of the confidence interval (CI) is expressed in terms of t_j, the mean absolute deviation of the jth sample, var(), the variance, and z_α, the value from the normal distribution for the chosen value of α: for α = 0.05, z_α = 1.96. The formulae used in the derivation of these confidence intervals also involve r, the Pearson correlation coefficient between the squared deviation scores of the two samples, constants a and b equal to 1 and 2, a variate x, and the standard deviation s of the sample.
Multivariate median
Previously, this article discussed the univariate median, when the sample or population was one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.[27][32][33][34]
Marginal median
The marginal median is defined for vectors with respect to a fixed set of coordinates: it is the vector whose components are the univariate medians of the individual coordinates.
Centerpoint
An alternative generalization of the median in higher dimensions is the centerpoint.
Other median-related concepts
Interpolated median
Pseudo-median
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median.[36] The Hodges–Lehmann estimator has been generalized to multivariate distributions.[37]
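For the one-sample case, the Hodges–Lehmann estimator can be computed as the median of all pairwise (Walsh) averages; a minimal Python sketch (the data are illustrative, and this formulation includes the pairs with i = j):

```python
# Hodges-Lehmann estimator: the median of all pairwise averages (x_i + x_j)/2.
import itertools
import statistics

def hodges_lehmann(sample):
    walsh = [(x + y) / 2 for x, y in itertools.combinations_with_replacement(sample, 2)]
    return statistics.median(walsh)

print(hodges_lehmann([1, 5, 2, 8, 7]))  # 4.5
```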
Variants of regression
Median filter
In the context of image processing of monochrome raster images, there is a type of noise known as salt-and-pepper noise, in which each pixel independently becomes black (with some small probability) or white (with some small probability), and is otherwise unchanged (with probability close to 1). An image constructed from the median values of neighborhoods (such as a 3×3 square) can effectively reduce this kind of noise.
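A toy median filter on a small grayscale image stored as a 2-D list (a minimal sketch; real implementations handle borders properly and use faster windowed algorithms):

```python
# 3x3 median filter; border pixels are left unchanged for simplicity.
import statistics

def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out

# A single white "salt" pixel in a dark image is removed by the filter.
noisy = [[0, 0, 0, 0],
         [0, 255, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(median_filter_3x3(noisy)[1][1])  # 0
```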
Cluster analysis
In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters in which the criterion used in k-means clustering, minimising the sum of squared distances from each point to its cluster mean, is replaced by minimising the sum of distances from each point to its cluster median.
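A bare-bones Python sketch of k-medians under the Manhattan (L1) distance, with coordinate-wise median updates (the data, initial centers, and fixed iteration count are illustrative simplifications):

```python
# k-medians: assign points to the nearest center under L1 distance, then
# update each center to the coordinate-wise median of its cluster.
import statistics

def k_medians(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            dists = [sum(abs(a - b) for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        centers = [
            tuple(statistics.median(coord) for coord in zip(*cluster)) if cluster else c
            for cluster, c in zip(clusters, centers)
        ]
    return centers

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(k_medians(pts, centers=[(0, 0), (10, 10)]))  # [(1, 1), (8, 8)]
```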
Median–median line
Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.[40] Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.[41] Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.[42]
Median-unbiased estimators
Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics.
An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.— page 584
Further properties of median-unbiased estimators have been reported.[44][45][46][47] Median-unbiased estimators are invariant under one-to-one transformations.
There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood functions.[48][49] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: it holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of loss functions.[50]
History
The idea of the median appeared in 1599 in Edward Wright's book on navigation (Certaine Errors in Navigation), in a section concerning the determination of location with a compass. Wright felt that this value was the most likely to be the correct value in a series of observations.
In 1757, Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median.[53]
Antoine Augustin Cournot in 1843 was the first[56] to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena;[57] it had earlier been used only in astronomy and related fields. Although the median had been used previously by Laplace, it was Fechner who popularized it in the formal analysis of data.[57]
See also
- Medoids, a generalisation of the median in higher dimensions
- Central tendency: Mean, Mode
- Absolute deviation
- Bias of an estimator
- Concentration of measure for Lipschitz functions
- Median (geometry)
- Median graph
- Median search
- Median slope
- Median voter theory
- Weighted median