Pearson's chi-squared test

Pearson's chi-squared test (χ²) is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is the most widely used of many chi-squared tests (e.g., Yates, likelihood ratio, portmanteau test in time series, etc.) – statistical procedures whose results are evaluated by reference to the chi-squared distribution. Its properties were first investigated by Karl Pearson in 1900.[1] In contexts where it is important to improve a distinction between the test statistic and its distribution, names similar to Pearson χ-squared test or statistic are used.

It tests a null hypothesis stating that the frequency distribution of certain events observed in a sample is consistent with a particular theoretical distribution. The events considered must be mutually exclusive and have total probability 1. A common case for this is where the events each cover an outcome of a categorical variable. A simple example is the hypothesis that an ordinary six-sided die is "fair" (i.e., all six outcomes are equally likely to occur).

Definition

Pearson's chi-squared test is used to assess three types of comparison: goodness of fit, homogeneity, and independence.

  • A test of goodness of fit establishes whether an observed frequency distribution differs from a theoretical distribution.

  • A test of homogeneity compares the distribution of counts for two or more groups using the same categorical variable (e.g. choice of activity—college, military, employment, travel—of graduates of a high school reported a year after graduation, sorted by graduation year, to see if number of graduates choosing a given activity has changed from class to class, or from decade to decade).[2]

  • A test of independence assesses whether observations consisting of measures on two variables, expressed in a contingency table, are independent of each other (e.g. polling responses from people of different nationalities to see if one's nationality is related to the response).

For all three tests, the computational procedure includes the following steps:

  1. Calculate the chi-squared test statistic, χ², which resembles a normalized sum of squared deviations between observed and theoretical frequencies (see below).

  2. Determine the degrees of freedom, df, of that statistic. For a test of goodness of fit, df = Cats − Parms, where Cats is the number of observation categories recognized by the model, and Parms is the number of parameters in the model adjusted to make the model best fit the observations: the number of categories reduced by the number of fitted parameters in the distribution. For a test of homogeneity, df = (Rows − 1)×(Cols − 1), where Rows corresponds to the number of categories (i.e. rows in the associated contingency table), and Cols corresponds to the number of independent groups (i.e. columns in the associated contingency table).[2] For a test of independence, df = (Rows − 1)×(Cols − 1), where in this case Rows corresponds to the number of categories in one variable, and Cols corresponds to the number of categories in the second variable.[2]

  3. Select a desired level of confidence (significance level, p-value or the corresponding alpha level) for the result of the test.

  4. Compare χ² to the critical value from the chi-squared distribution with df degrees of freedom and the selected confidence level (one-sided since the test is only one direction, i.e. is the test value greater than the critical value?), which in many cases gives a good approximation of the distribution of χ².

  5. Sustain or reject the null hypothesis that the observed frequency distribution is the same as the theoretical distribution based on whether the test statistic exceeds the critical value of χ². If the test statistic exceeds the critical value of χ², the null hypothesis ( = there is no difference between the distributions) can be rejected, and the alternative hypothesis ( = there is a difference between the distributions) can be accepted, both with the selected level of confidence. If the test statistic falls below the threshold χ² value, then no clear conclusion can be reached, and the null hypothesis is sustained (we failed to reject the null hypothesis), but not necessarily accepted.
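
To make the procedure concrete, the five steps for a simple goodness-of-fit comparison can be sketched in a few lines of Python. This is illustrative only: SciPy is assumed to be available, and the observed counts are invented.

    # Sketch of the five-step procedure for a goodness-of-fit test.
    # Assumes SciPy; the observed counts are invented for illustration.
    from scipy.stats import chi2

    observed = [28, 20, 30, 22]                      # four categories, N = 100
    N = sum(observed)
    expected = [N / len(observed)] * len(observed)   # uniform model, nothing fitted

    # Step 1: the chi-squared test statistic.
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    # Step 2: degrees of freedom; with no fitted parameters, only the
    # constraint that the counts sum to N is lost: df = 4 - 1 = 3.
    df = len(observed) - 1

    # Step 3: choose a significance level.
    alpha = 0.05

    # Step 4: upper-tail critical value of the chi-squared distribution.
    critical = chi2.ppf(1 - alpha, df)

    # Step 5: reject the null hypothesis only if the statistic exceeds it.
    print(f"chi2 = {stat:.2f}, critical = {critical:.3f}, reject = {stat > critical}")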

Test for fit of a distribution

Discrete uniform distribution

In this case $N$ observations are divided among $n$ cells. A simple application is to test the hypothesis that, in the general population, values would occur in each cell with equal frequency. The "theoretical frequency" for any cell (under the null hypothesis of a discrete uniform distribution) is thus calculated as

$$E_i = \frac{N}{n},$$

and the reduction in the degrees of freedom is $p = 1$, notionally because the observed frequencies $O_i$ are constrained to sum to $N$.

One specific example of its application is the log-rank test.

Other distributions

When testing whether observations are random variables whose distribution belongs to a given family of distributions, the "theoretical frequencies" are calculated using a distribution from that family fitted in some standard way. The reduction in the degrees of freedom is calculated as $p = s + 1$, where $s$ is the number of co-variates used in fitting the distribution. For instance, when checking a three-co-variate Weibull distribution, $p = 4$; when checking a normal distribution (where the parameters are mean and standard deviation), $p = 3$; and when checking a Poisson distribution (where the parameter is the expected value), $p = 2$. Thus, there will be $n - p$ degrees of freedom, where $n$ is the number of categories.

The degrees of freedom are not based on the number of observations as with a Student's t or F-distribution. For example, if testing for a fair, six-sided die, there would be five degrees of freedom because there are six categories (the six faces); the number of times the die is rolled does not influence the number of degrees of freedom.
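
To make the degrees-of-freedom bookkeeping concrete, here is a hedged sketch (SciPy assumed; the counts are invented) of a goodness-of-fit test against a Poisson distribution. One parameter is fitted, so p = 2 and there are n − 2 degrees of freedom.

    # Sketch: goodness of fit to a Poisson distribution with one fitted
    # parameter (s = 1, hence p = s + 1 = 2). Counts are illustrative only.
    from scipy.stats import chi2, poisson

    observed = [35, 38, 16, 8, 3]        # counts of 0, 1, 2, 3 and >=4 events
    n_obs = sum(observed)
    # Crude sample mean as the fitted parameter (treats ">=4" as exactly 4).
    lam = sum(k * o for k, o in enumerate(observed)) / n_obs

    # Expected counts under the fitted Poisson; the last cell absorbs the tail.
    probs = [poisson.pmf(k, lam) for k in range(4)]
    probs.append(1 - sum(probs))
    expected = [n_obs * p for p in probs]

    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = len(observed) - 2               # n categories minus p = 2
    print(f"chi2 = {stat:.3f}, df = {df}, p-value = {chi2.sf(stat, df):.3f}")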

Calculating the test-statistic

Upper-tail critical values of chi-square distribution [3]
(rows: degrees of freedom; columns: probability less than the critical value)

df      0.90      0.95      0.975     0.99      0.999
1       2.706     3.841     5.024     6.635     10.828
2       4.605     5.991     7.378     9.210     13.816
3       6.251     7.815     9.348     11.345    16.266
4       7.779     9.488     11.143    13.277    18.467
5       9.236     11.070    12.833    15.086    20.515
6       10.645    12.592    14.449    16.812    22.458
7       12.017    14.067    16.013    18.475    24.322
8       13.362    15.507    17.535    20.090    26.125
9       14.684    16.919    19.023    21.666    27.877
10      15.987    18.307    20.483    23.209    29.588
11      17.275    19.675    21.920    24.725    31.264
12      18.549    21.026    23.337    26.217    32.910
13      19.812    22.362    24.736    27.688    34.528
14      21.064    23.685    26.119    29.141    36.123
15      22.307    24.996    27.488    30.578    37.697
16      23.542    26.296    28.845    32.000    39.252
17      24.769    27.587    30.191    33.409    40.790
18      25.989    28.869    31.526    34.805    42.312
19      27.204    30.144    32.852    36.191    43.820
20      28.412    31.410    34.170    37.566    45.315
21      29.615    32.671    35.479    38.932    46.797
22      30.813    33.924    36.781    40.289    48.268
23      32.007    35.172    38.076    41.638    49.728
24      33.196    36.415    39.364    42.980    51.179
25      34.382    37.652    40.646    44.314    52.620
26      35.563    38.885    41.923    45.642    54.052
27      36.741    40.113    43.195    46.963    55.476
28      37.916    41.337    44.461    48.278    56.892
29      39.087    42.557    45.722    49.588    58.301
30      40.256    43.773    46.979    50.892    59.703
31      41.422    44.985    48.232    52.191    61.098
32      42.585    46.194    49.480    53.486    62.487
33      43.745    47.400    50.725    54.776    63.870
34      44.903    48.602    51.966    56.061    65.247
35      46.059    49.802    53.203    57.342    66.619
36      47.212    50.998    54.437    58.619    67.985
37      48.363    52.192    55.668    59.893    69.347
38      49.513    53.384    56.896    61.162    70.703
39      50.660    54.572    58.120    62.428    72.055
40      51.805    55.758    59.342    63.691    73.402
41      52.949    56.942    60.561    64.950    74.745
42      54.090    58.124    61.777    66.206    76.084
43      55.230    59.304    62.990    67.459    77.419
44      56.369    60.481    64.201    68.710    78.750
45      57.505    61.656    65.410    69.957    80.077
46      58.641    62.830    66.617    71.201    81.400
47      59.774    64.001    67.821    72.443    82.720
48      60.907    65.171    69.023    73.683    84.037
49      62.038    66.339    70.222    74.919    85.351
50      63.167    67.505    71.420    76.154    86.661
51      64.295    68.669    72.616    77.386    87.968
52      65.422    69.832    73.810    78.616    89.272
53      66.548    70.993    75.002    79.843    90.573
54      67.673    72.153    76.192    81.069    91.872
55      68.796    73.311    77.380    82.292    93.168
56      69.919    74.468    78.567    83.513    94.461
57      71.040    75.624    79.752    84.733    95.751
58      72.160    76.778    80.936    85.950    97.039
59      73.279    77.931    82.117    87.166    98.324
60      74.397    79.082    83.298    88.379    99.607
61      75.514    80.232    84.476    89.591    100.888
62      76.630    81.381    85.654    90.802    102.166
63      77.745    82.529    86.830    92.010    103.442
64      78.860    83.675    88.004    93.217    104.716
65      79.973    84.821    89.177    94.422    105.988
66      81.085    85.965    90.349    95.626    107.258
67      82.197    87.108    91.519    96.828    108.526
68      83.308    88.250    92.689    98.028    109.791
69      84.418    89.391    93.856    99.228    111.055
70      85.527    90.531    95.023    100.425   112.317
71      86.635    91.670    96.189    101.621   113.577
72      87.743    92.808    97.353    102.816   114.835
73      88.850    93.945    98.516    104.010   116.092
74      89.956    95.081    99.678    105.202   117.346
75      91.061    96.217    100.839   106.393   118.599
76      92.166    97.351    101.999   107.583   119.850
77      93.270    98.484    103.158   108.771   121.100
78      94.374    99.617    104.316   109.958   122.348
79      95.476    100.749   105.473   111.144   123.594
80      96.578    101.879   106.629   112.329   124.839
81      97.680    103.010   107.783   113.512   126.083
82      98.780    104.139   108.937   114.695   127.324
83      99.880    105.267   110.090   115.876   128.565
84      100.980   106.395   111.242   117.057   129.804
85      102.079   107.522   112.393   118.236   131.041
86      103.177   108.648   113.544   119.414   132.277
87      104.275   109.773   114.693   120.591   133.512
88      105.372   110.898   115.841   121.767   134.746
89      106.469   112.022   116.989   122.942   135.978
90      107.565   113.145   118.136   124.116   137.208
91      108.661   114.268   119.282   125.289   138.438
92      109.756   115.390   120.427   126.462   139.666
93      110.850   116.511   121.571   127.633   140.893
94      111.944   117.632   122.715   128.803   142.119
95      113.038   118.752   123.858   129.973   143.344
96      114.131   119.871   125.000   131.141   144.567
97      115.223   120.990   126.141   132.309   145.789
98      116.315   122.108   127.282   133.476   147.010
99      117.407   123.225   128.422   134.642   148.230
100     118.498   124.342   129.561   135.807   149.449

The value of the test-statistic is

$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i} = N \sum_{i=1}^{n} \frac{\left(O_i/N - p_i\right)^2}{p_i}$$

where

$\chi^2$ = Pearson's cumulative test statistic, which asymptotically approaches a $\chi^2$ distribution;
$O_i$ = the number of observations of type $i$;
$N$ = total number of observations;
$E_i = N p_i$ = the expected (theoretical) count of type $i$, asserted by the null hypothesis that the fraction of type $i$ in the population is $p_i$;
$n$ = the number of cells in the table.

The chi-squared statistic can then be used to calculate a p-value by comparing the value of the statistic to a chi-squared distribution. The number of degrees of freedom is equal to the number of cells $n$, minus the reduction in degrees of freedom, $p$.

The result about the numbers of degrees of freedom is valid when the original data are multinomial and hence the estimated parameters are efficient for minimizing the chi-squared statistic. More generally however, when maximum likelihood estimation does not coincide with minimum chi-squared estimation, the distribution will lie somewhere between a chi-squared distribution with $n - 1 - p$ and $n - 1$ degrees of freedom (see for instance Chernoff and Lehmann, 1954).
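
The formula translates directly into code. The sketch below (SciPy assumed) computes the statistic by hand for the 44-men/56-women sample used in the goodness-of-fit example later in this article, and cross-checks it against scipy.stats.chisquare:

    # The statistic computed from the definition, cross-checked with SciPy.
    from scipy.stats import chisquare

    observed = [44, 56]
    expected = [50, 50]

    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    # ddof is the extra reduction in degrees of freedom beyond the default n - 1.
    result = chisquare(observed, f_exp=expected, ddof=0)
    assert abs(stat - result.statistic) < 1e-12
    print(f"chi2 = {stat:.2f}, p-value = {result.pvalue:.3f}")   # 1.44, ~0.23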

Bayesian method

In Bayesian statistics, one would instead use a Dirichlet distribution as conjugate prior. If one took a uniform prior, then the maximum likelihood estimate for the population probability is the observed probability, and one may compute a credible region around this or another estimate.
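
A minimal sketch of this Bayesian alternative (SciPy assumed; the counts reuse the 44/56 sample from the goodness-of-fit example below): with a uniform Dirichlet prior, the posterior after observing the counts is Dirichlet(1 + O_1, ..., 1 + O_k), and the marginal for each cell probability is a Beta distribution, from which a credible interval can be read off.

    # Sketch: Dirichlet-multinomial updating with a uniform prior.
    from scipy.stats import beta

    observed = [44, 56]
    prior = [1.0] * len(observed)                    # uniform Dirichlet prior
    post = [a + o for a, o in zip(prior, observed)]  # conjugate update

    # The marginal of component i of a Dirichlet is Beta(a_i, sum(a) - a_i).
    a_male = post[0]
    b_male = sum(post) - a_male
    lo, hi = beta.ppf([0.025, 0.975], a_male, b_male)
    print(f"95% credible interval for P(male): ({lo:.3f}, {hi:.3f})")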

Testing for statistical independence

In this case, an "observation" consists of the values of two outcomes and the null hypothesis is that the occurrence of these outcomes is statistically independent. Each observation is allocated to one cell of a two-dimensional array of cells (called a contingency table) according to the values of the two outcomes. If there are r rows and c columns in the table, the "theoretical frequency" for a cell, given the hypothesis of independence, is

whereis the total sample size (the sum of all cells in the table), and

is the fraction of observations of type i ignoring the column attribute (fraction of row totals), and

is the fraction of observations of type j ignoring the row attribute (fraction of column totals). The term "frequencies" refers to absolute numbers rather than already normalised values.

The value of the test-statistic is

$$\chi^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}} = N \sum_{i,j} p_{i\cdot} p_{\cdot j} \left( \frac{O_{i,j}/N - p_{i\cdot} p_{\cdot j}}{p_{i\cdot} p_{\cdot j}} \right)^2$$

Note that $\chi^2$ is 0 if and only if $O_{i,j} = E_{i,j}$ for all $i, j$, i.e. only if the expected and observed numbers of observations are equal in all cells.

Fitting the model of "independence" reduces the number of degrees of freedom by p = r + c − 1. The number of degrees of freedom is equal to the number of cells rc, minus the reduction in degrees of freedom, p, which reduces to (r − 1)(c − 1).

For the test of independence, also known as the test of homogeneity, a chi-squared probability of less than or equal to 0.05 (or the chi-squared statistic being at or larger than the 0.05 critical point) is commonly interpreted by applied workers as justification for rejecting the null hypothesis that the row variable is independent of the column variable.[4] The alternative hypothesis corresponds to the variables having an association or relationship where the structure of this relationship is not specified.
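
In practice the whole computation for a contingency table, including the expected counts and the (r − 1)(c − 1) degrees of freedom, is a single call. The hedged sketch below assumes SciPy and uses an invented 2-by-3 table:

    # Sketch: test of independence on an invented 2x3 contingency table.
    from scipy.stats import chi2_contingency

    table = [[90, 60, 104],
             [30, 50, 51]]

    # correction=False gives the plain Pearson statistic (no Yates correction).
    stat, pvalue, df, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {stat:.3f}, df = {df}, p-value = {pvalue:.4f}")
    # df = (2 - 1) * (3 - 1) = 2; reject independence at the 0.05 level if p < 0.05.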

Assumptions

The chi-squared test, when used with the standard approximation that a chi-squared distribution is applicable, has the following assumptions:

  • Simple random sample: The sample data is a random sampling from a fixed distribution or population where every collection of members of the population of the given sample size has an equal probability of selection. Variants of the test have been developed for complex samples, such as where the data is weighted. Other forms can be used, such as purposive sampling.[5]

  • Sample size (whole table): A sample with a sufficiently large size is assumed. If a chi-squared test is conducted on a sample with a smaller size, then the test will yield an inaccurate inference. The researcher, by using the chi-squared test on small samples, might end up committing a Type II error.

  • Expected cell count: Adequate expected cell counts. Some require 5 or more, and others require 10 or more. A common rule is 5 or more in all cells of a 2-by-2 table, and 5 or more in 80% of cells in larger tables, but no cells with zero expected count. When this assumption is not met, Yates's correction is applied.

  • Independence: The observations are always assumed to be independent of each other. This means chi-squared cannot be used to test correlated data (like matched pairs or panel data). In those cases, McNemar's test may be more appropriate.

A test that relies on different assumptions is Fisher's exact test; if its assumption of fixed marginal distributions is met it is substantially more accurate in obtaining a significance level, especially with few observations. In the vast majority of applications this assumption will not be met, and Fisher's exact test will be over conservative and not have correct coverage.[6]

Derivation

The null distribution of the Pearson statistic with j rows and k columns is approximated by the chi-squared distribution with (k − 1)(j − 1) degrees of freedom.[7]

This approximation arises as the true distribution, under the null hypothesis, if the expected value is given by a multinomial distribution. For large sample sizes, the central limit theorem says this distribution tends toward a certain multivariate normal distribution.

Two cells

In the special case where there are only two cells in the table, the expected values follow a binomial distribution,

$$O \sim \operatorname{Bin}(n, p),$$

where

$p$ = probability, under the null hypothesis;
$n$ = number of observations in the sample.

If, for example, the hypothesised probability of a male observation is 0.5, with 100 samples, then we expect to observe 50 males.

If $n$ is sufficiently large, the above binomial distribution may be approximated by a Gaussian (normal) distribution and thus the Pearson test statistic approximates a chi-squared distribution:

$$\operatorname{Bin}(n, p) \approx \mathrm{N}(np,\, np(1-p)).$$

Let $O_1$ be the number of observations from the sample that are in the first cell. The Pearson test statistic can be expressed as

$$\frac{(O_1 - np)^2}{np} + \frac{(n - O_1 - n(1-p))^2}{n(1-p)},$$

which can in turn be expressed as

$$\frac{(O_1 - np)^2}{np(1-p)}.$$

By the normal approximation to a binomial this is the square of one standard normal variate, and hence is distributed as chi-squared with 1 degree of freedom. Note that the denominator is one standard deviation of the Gaussian approximation, so it can be written

$$\left( \frac{O_1 - np}{\sqrt{np(1-p)}} \right)^2.$$

Consistent with the meaning of the chi-squared distribution, we are measuring how probable the observed number of standard deviations away from the mean is under the Gaussian approximation (which is a good approximation for large $n$).

The chi-squared distribution is then integrated on the right of the statistic value to obtain the P-value, which is equal to the probability of getting a statistic equal or bigger than the observed one, assuming the null hypothesis.
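
As a numeric sanity check, here is a small sketch (SciPy assumed; the counts are invented) verifying that the two-cell statistic collapses to a single squared standardised deviation:

    # Numeric check: the two-cell statistic equals one squared z-score.
    from math import sqrt
    from scipy.stats import chi2

    n, p, o1 = 100, 0.5, 44               # invented: 44 successes in 100 trials

    two_cell = (o1 - n*p)**2 / (n*p) + ((n - o1) - n*(1 - p))**2 / (n*(1 - p))
    z_squared = ((o1 - n*p) / sqrt(n*p*(1 - p)))**2
    assert abs(two_cell - z_squared) < 1e-12

    # Integrating the chi-squared density to the right of the statistic.
    print(f"chi2 = {two_cell:.2f}, p-value = {chi2.sf(two_cell, 1):.3f}")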

Two-by-two contingency tables

When the test is applied to a contingency table containing two rows and two columns, the test is equivalent to a Z-test of proportions.
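
This equivalence is easy to verify numerically. The following sketch (SciPy assumed; the counts are invented) shows that the uncorrected Pearson statistic on a 2-by-2 table equals the square of the pooled two-proportion z statistic:

    # 2x2 table: the Pearson statistic equals the squared pooled z statistic.
    from math import sqrt
    from scipy.stats import chi2_contingency

    a, b = 30, 70    # group 1: successes, failures (invented)
    c, d = 45, 55    # group 2: successes, failures (invented)

    stat, _, _, _ = chi2_contingency([[a, b], [c, d]], correction=False)

    p1, p2 = a / (a + b), c / (c + d)
    pooled = (a + c) / (a + b + c + d)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1/(a + b) + 1/(c + d)))
    assert abs(stat - z**2) < 1e-9
    print(f"chi2 = {stat:.4f} equals z^2 = {z**2:.4f}")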

Many cells

Similar arguments as above lead to the desired result. Each cell (except the final one, whose value is completely determined by the others) is treated as an independent binomial variable; their contributions are summed, and each contributes one degree of freedom.

Let us now prove that the distribution indeed approaches asymptotically the $\chi^2$ distribution as the number of observations approaches infinity.

Let $n$ be the number of observations, $k$ the number of cells and $p_i$ the probability of an observation falling in the $i$-th cell, for $1 \le i \le k$. We denote by $O = \{O_1, \ldots, O_k\}$ the configuration where for each $i$ there are $O_i$ observations in the $i$-th cell. Note that

$$\sum_{i=1}^{k} O_i = n.$$

Let $\chi^2(O) = \sum_{i=1}^{k} \frac{(O_i - np_i)^2}{np_i}$ be Pearson's cumulative test statistic for such a configuration, and let $P(\chi^2(O) > T)$ be the distribution of this statistic. We will show that the latter probability approaches the $\chi^2$ distribution with $k - 1$ degrees of freedom, as $n \to \infty$.

For any arbitrary value $T$:

$$P(\chi^2(O) > T) = \sum_{O:\ \chi^2(O) > T} \frac{n!}{O_1! \cdots O_k!} \prod_{i=1}^{k} p_i^{O_i}.$$

We will use a procedure similar to the approximation in the de Moivre–Laplace theorem. Contributions from small $O_i$ are of subleading order in $n$ and thus for large $n$ we may use Stirling's formula for both $n!$ and $O_i!$ to get the following (the factors $e^{-n}$ and $\prod_i e^{-O_i}$ cancel, since the $O_i$ sum to $n$):

$$P(\chi^2(O) > T) \approx \sum_{O:\ \chi^2(O) > T} \frac{\sqrt{2\pi n}\; n^n}{\prod_{i=1}^{k} \sqrt{2\pi O_i}\; O_i^{O_i}} \prod_{i=1}^{k} p_i^{O_i}.$$

By substituting

$$x_i = \frac{O_i - np_i}{\sqrt{n}},$$

we may approximate for large $n$ the sum over the $O_i$ by an integral over the $x_i$. Noting that

$$O_i = np_i + x_i \sqrt{n},$$

we arrive at

$$P(\chi^2(O) > T) \approx C \int_{\chi^2 > T} \exp\left( -\sum_{i=1}^{k} O_i \ln \frac{O_i}{np_i} \right) dx_1 \cdots dx_{k-1},$$

where $C$ collects the constant prefactors. By expanding the logarithm and taking the leading terms in $n$, we get

$$P(\chi^2(O) > T) \approx C \int_{\chi^2 > T} \exp\left( -\frac{1}{2} \sum_{i=1}^{k} \frac{x_i^2}{p_i} \right) dx_1 \cdots dx_{k-1}.$$

Now, Pearson's chi, $\chi^2 = \sum_{i=1}^{k} x_i^2 / p_i$, is precisely the argument of the exponent (except for the $-1/2$); note that the final term in the exponent's argument is equal to $x_k^2 / p_k$ with $x_k = -(x_1 + \cdots + x_{k-1})$, since the $x_i$ sum to zero.

This argument can be written as:

$$-\frac{1}{2} \sum_{i,j=1}^{k-1} A_{ij} x_i x_j, \qquad A_{ij} = \frac{\delta_{ij}}{p_i} + \frac{1}{p_k}.$$

$A$ is a regular symmetric matrix, and hence diagonalizable. It is therefore possible to make a linear change of variables in $x_1, \ldots, x_{k-1}$ so as to get $k - 1$ new variables $y_1, \ldots, y_{k-1}$ so that:

$$\sum_{i,j=1}^{k-1} A_{ij} x_i x_j = \sum_{i=1}^{k-1} y_i^2.$$

This linear change of variables merely multiplies the integral by a constant Jacobian, so we get:

$$P(\chi^2(O) > T) \approx C' \int_{\sum_i y_i^2 > T} \exp\left( -\frac{1}{2} \sum_{i=1}^{k-1} y_i^2 \right) dy_1 \cdots dy_{k-1},$$

where $C'$ is a constant.

This is the probability that the squared sum of $k - 1$ independent normally distributed variables of zero mean and unit variance will be greater than $T$, namely that $\chi^2$ with $k - 1$ degrees of freedom is larger than $T$.

We have thus shown that in the limit where $n \to \infty$, the distribution of Pearson's chi-squared statistic approaches the chi-squared distribution with $k - 1$ degrees of freedom.
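
The limit can also be probed empirically. The following Monte Carlo sketch (NumPy and SciPy assumed; the cell probabilities are invented) simulates multinomial samples and compares the upper-tail frequency of the statistic with the chi-squared tail with k − 1 degrees of freedom:

    # Monte Carlo check that the statistic is approximately chi2 with k-1 df.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n, probs = 1000, np.array([0.2, 0.3, 0.1, 0.4])   # invented cell probabilities
    k = len(probs)

    counts = rng.multinomial(n, probs, size=20000)     # 20000 simulated tables
    expected = n * probs
    stats = ((counts - expected) ** 2 / expected).sum(axis=1)

    # Compare the empirical upper-tail frequency with the chi2(k-1) tail.
    T = chi2.ppf(0.95, k - 1)
    print(f"empirical P(chi2 > T) = {(stats > T).mean():.4f} (theory: 0.0500)")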

Examples

Fairness of dice

A six-sided die is thrown 60 times. The number of times it lands with 1, 2, 3, 4, 5 and 6 face up is 5, 8, 9, 8, 10 and 20, respectively. Is the die biased, according to Pearson's chi-squared test at a significance level of 95% and/or 99%?

n = 6 as there are 6 possible outcomes, 1 to 6. The null hypothesis is that the die is unbiased, hence each number is expected to occur the same number of times, in this case, 60/n = 10. The outcomes can be tabulated as follows:

i     Oi    Ei    Oi − Ei    (Oi − Ei)²    (Oi − Ei)²/Ei
1     5     10    −5         25            2.5
2     8     10    −2         4             0.4
3     9     10    −1         1             0.1
4     8     10    −2         4             0.4
5     10    10    0          0             0
6     20    10    10         100           10
Sum                                        13.4

The number of degrees of freedom is n − 1 = 5. The table of upper-tail critical values of the chi-square distribution gives a critical value of 11.070 at the 95% significance level:

df      0.90      0.95      0.975     0.99      0.999
5       9.236     11.070    12.833    15.086    20.515

As the chi-squared statistic of 13.4 exceeds this critical value, we reject the null hypothesis and conclude that the die is biased at 95% significance level.

At 99% significance level, the critical value is 15.086. As the chi-squared statistic does not exceed it, we fail to reject the null hypothesis and thus conclude that there is insufficient evidence to show that the die is biased at 99% significance level.
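
The arithmetic of this example can be reproduced in a few lines; a sketch assuming SciPy:

    # Reproducing the dice example: chi2 = 13.4 with 5 degrees of freedom.
    from scipy.stats import chi2

    observed = [5, 8, 9, 8, 10, 20]
    expected = [10] * 6                      # 60 rolls of a fair die

    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    for conf in (0.95, 0.99):
        crit = chi2.ppf(conf, df=5)
        print(f"chi2 = {stat:.1f}, critical at {conf:.0%} = {crit:.3f}, "
              f"reject = {stat > crit}")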

Goodness of fit

In this context, the frequencies of both theoretical and empirical distributions are unnormalised counts, and for a chi-squared test the total sample sizes $N$ of both these distributions (sums of all cells of the corresponding contingency tables) have to be the same.

For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 44 men in the sample and 56 women, then

$$\chi^2 = \frac{(44 - 50)^2}{50} + \frac{(56 - 50)^2}{50} = 1.44.$$

If the null hypothesis is true (i.e., men and women are chosen with equal probability), the test statistic will be drawn from a chi-squared distribution with one degree of freedom (because if the male frequency is known, then the female frequency is determined).

Consultation of the chi-squared distribution for 1 degree of freedom shows that the probability of observing this difference (or a more extreme difference than this) if men and women are equally numerous in the population is approximately 0.23. This probability is higher than conventional criteria for statistical significance (0.01 or 0.05), so normally we would not reject the null hypothesis that the number of men in the population is the same as the number of women (i.e., we would consider our sample within the range of what we would expect for a 50/50 male/female ratio.)

Problems

The approximation to the chi-squared distribution breaks down if expected frequencies are too low. It will normally be acceptable so long as no more than 20% of the events have expected frequencies below 5. Where there is only 1 degree of freedom, the approximation is not reliable if expected frequencies are below 10. In this case, a better approximation can be obtained by reducing the absolute value of each difference between observed and expected frequencies by 0.5 before squaring; this is called Yates's correction for continuity.
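
Yates's correction is straightforward to apply by hand. The sketch below (plain Python; the counts are invented) shrinks each absolute difference by 0.5 before squaring:

    # Yates's continuity correction: shrink each |O - E| by 0.5 before squaring.
    observed = [12, 8]          # invented small-sample counts
    expected = [10, 10]

    plain = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    yates = sum(max(abs(o - e) - 0.5, 0) ** 2 / e
                for o, e in zip(observed, expected))
    print(f"uncorrected chi2 = {plain:.3f}, Yates-corrected chi2 = {yates:.3f}")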

In cases where the expected value, E, is found to be small (indicating a small underlying population probability, and/or a small number of observations), the normal approximation of the multinomial distribution can fail, and in such cases it is found to be more appropriate to use the G-test, a likelihood ratio-based test statistic. When the total sample size is small, it is necessary to use an appropriate exact test, typically either the binomial test or (for contingency tables) Fisher's exact test. This test uses the conditional distribution of the test statistic given the marginal totals; however, it does not assume that the data were generated from an experiment in which the marginal totals are fixed and is valid whether or not that is the case.
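
For comparison with Pearson's statistic, here is a sketch of the G statistic mentioned here (plain Python plus SciPy for the p-value; the counts reuse the 44/56 example): G = 2 Σ O_i ln(O_i / E_i), referred to the same chi-squared distribution.

    # The G-test statistic on the same data as the Pearson example above.
    from math import log
    from scipy.stats import chi2

    observed = [44, 56]
    expected = [50, 50]

    g = 2 * sum(o * log(o / e) for o, e in zip(observed, expected))
    print(f"G = {g:.3f}, p-value = {chi2.sf(g, df=1):.3f}")   # close to Pearson's 1.44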

It can be shown that the $\chi^2$ test is a low-order approximation of the $\Psi$ test.[8] The reasons for the above issues become apparent when the higher order terms are investigated.

See also

  • G-test, test to which chi-squared test is an approximation

  • Degrees of freedom (statistics)

  • Fisher's exact test

  • Median test

  • Lexis ratio, earlier statistic, replaced by chi-squared

  • Chi-squared nomogram

  • Deviance (statistics), another measure of the quality of fit

  • Mann–Whitney U test

  • Cramér's V – a measure of correlation for the chi-squared test

  • Minimum chi-square estimation

References

[1] Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine. Series 5. 50 (302): 157–175. doi:10.1080/14786440009463897.

[2] Bock, David E.; Velleman, Paul F.; De Veaux, Richard D. (2007). Stats: Modeling the World, pp. 606–627. Pearson Addison Wesley, Boston. ISBN 0-13-187621-X.

[3] "1.3.6.7.4. Critical Values of the Chi-Square Distribution". NIST/SEMATECH e-Handbook of Statistical Methods. Retrieved 14 October 2014.

[4] "Critical Values of the Chi-Squared Distribution". NIST/SEMATECH e-Handbook of Statistical Methods. National Institute of Standards and Technology.

[5] Field, Andy. Discovering Statistics Using SPSS. (For assumptions on chi-square.)

[6] "A Bayesian Formulation for Exploratory Data Analysis and Goodness-of-Fit Testing". International Statistical Review. p. 375.

[7] Statistics for Applications. MIT OpenCourseWare. Lecture 23: Pearson's Theorem. Retrieved 21 March 2007.

[8] Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. p. 298. ISBN 978-0-521-59271-0.

[9] Chernoff, Herman; Lehmann, E. L. (1954). "The use of maximum likelihood estimates in χ² tests for goodness of fit". The Annals of Mathematical Statistics. 25 (3): 579–586. doi:10.1214/aoms/1177728726.