Linear least squares (mathematics)

Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.
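To make the two numerical routes concrete, here is a minimal sketch in Python/NumPy (the data and model are invented for illustration, not taken from the article): it solves the same problem once through the normal equations and once through an orthogonal-decomposition routine.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # design matrix: m = 100 points, n = 3 parameters
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Route 1: solve the normal equations, beta = (X^T X)^{-1} X^T y
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Route 2: orthogonal decomposition; np.linalg.lstsq factorizes X directly
# (via an SVD) instead of forming X^T X, which is better conditioned
beta_orth, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal, beta_orth)            # the two agree for this well-posed problem
```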

Main formulations

The three main linear least squares formulations are:

  • Ordinary least squares (OLS), which assumes uncorrelated residuals of equal variance and minimizes $S(\boldsymbol\beta) = \|\mathbf y - X\boldsymbol\beta\|^2$, giving the estimator $\hat{\boldsymbol\beta} = (X^{\mathsf T}X)^{-1}X^{\mathsf T}\mathbf y$.

  • Weighted least squares (WLS), used when the residuals have unequal variances (heteroscedasticity); with a diagonal weight matrix $W$, $\hat{\boldsymbol\beta} = (X^{\mathsf T}WX)^{-1}X^{\mathsf T}W\mathbf y$.

  • Generalized least squares (GLS), which allows correlated residuals with covariance matrix $\Omega$; $\hat{\boldsymbol\beta} = (X^{\mathsf T}\Omega^{-1}X)^{-1}X^{\mathsf T}\Omega^{-1}\mathbf y$.
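The following sketch (Python/NumPy, with invented data and an assumed-known weight matrix W and covariance matrix Omega) shows the three estimators side by side; it is an illustration of the formulas above, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 50
X = np.column_stack([np.ones(m), rng.normal(size=m)])    # intercept + one regressor
y = X @ np.array([2.0, 3.0]) + rng.normal(size=m)

# OLS: minimize ||y - X b||^2
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# WLS: minimize sum_i w_i * r_i^2, with a diagonal weight matrix W (assumed known)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# GLS: allow a full error covariance Omega (identity here, so GLS reduces to OLS)
Omega = np.eye(m)
Oinv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)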

Alternative formulations

Other formulations include:

  • Iteratively reweighted least squares (IRLS), used when the residual variances are unequal or correlated and the weights must themselves be estimated from the data.[2][3][4]

  • Instrumental variables (IV) regression, used when the regressors are correlated with the errors.

  • Total least squares (TLS), which treats errors in the regressors and in the dependent variable symmetrically.[5]

In addition, percentage least squares focuses on reducing percentage errors, which is useful in the field of forecasting or time series analysis. It is also useful in situations where the dependent variable has a wide range without constant variance, as here the larger residuals at the upper end of the range would dominate if OLS were used. When the percentage or relative error is normally distributed, least squares percentage regression provides maximum likelihood estimates. Percentage regression is linked to a multiplicative error model, whereas OLS is linked to models containing an additive error term.[6]
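One common way to compute a least squares percentage regression is as a weighted least squares problem with weights $1/y_i^2$, since minimizing the squared relative errors $((y_i - \hat y_i)/y_i)^2$ amounts to exactly that weighting. A hedged Python/NumPy sketch (assuming strictly nonzero $y$):

```python
import numpy as np

def percentage_least_squares(X, y):
    """Minimize sum_i ((y_i - x_i @ b) / y_i)**2, i.e. WLS with weights 1/y_i**2."""
    w = 1.0 / y**2              # per-observation weights; y must be nonzero
    XtW = X.T * w               # same as X.T @ diag(w), without building the matrix
    return np.linalg.solve(XtW @ X, XtW @ y)
```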

In constrained least squares, one is interested in solving a linear least squares problem with an additional constraint on the solution.
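As one concrete (illustrative) form of this, a linear equality constraint $A\boldsymbol\beta = c$ can be handled with Lagrange multipliers by solving the resulting KKT system; the matrices below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)
A = np.array([[1.0, 1.0, 1.0]])            # example constraint: coefficients sum to 1
c = np.array([1.0])

n, k = X.shape[1], A.shape[0]
# KKT system for: minimize ||y - X b||^2  subject to  A b = c
KKT = np.block([[2 * X.T @ X, A.T],
                [A, np.zeros((k, k))]])
rhs = np.concatenate([2 * X.T @ y, c])
beta = np.linalg.solve(KKT, rhs)[:n]       # the last k entries are the multipliers
print(A @ beta)                            # equals c up to rounding error
```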

Objective function

In OLS (i.e., assuming unweighted observations), the optimal value of the objective function is found by substituting the optimal expression for the coefficient vector:

$$S = \mathbf y^{\mathsf T}(I - H)^{\mathsf T}(I - H)\mathbf y = \mathbf y^{\mathsf T}(I - H)\mathbf y,$$

where $H = X(X^{\mathsf T}X)^{-1}X^{\mathsf T}$ is the hat matrix, the latter equality holding since $I - H$ is symmetric and idempotent. It can be shown from this[7] that under an appropriate assignment of weights the expected value of S is $m - n$. If instead unit weights are assumed, the expected value of S is $(m - n)\sigma^2$, where $\sigma^2$ is the variance of each observation.
If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared ($\chi^2$) distribution with $m - n$ degrees of freedom. Some illustrative percentile values of $\chi^2$ are given in the following table.[8]

  m − n    χ²(0.50)    χ²(0.95)    χ²(0.99)
    10       9.34        18.3        23.2
    25       24.3        37.7        44.3
   100       99.3        124         136

These values can be used for a statistical criterion as to the goodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.

For WLS, the ordinary objective function above is replaced by a weighted sum of squared residuals, in which each squared residual is multiplied by the weight of the corresponding observation.
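A short simulation (Python/NumPy, with invented data) illustrating the statements above for unit weights: the mean of $S = \mathbf y^{\mathsf T}(I - H)\mathbf y$ over many noise realizations is close to $(m - n)\sigma^2$, and $S/\sigma^2$ behaves like a $\chi^2$ variable with $m - n$ degrees of freedom when the errors are normal.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, sigma = 200, 5, 0.7
X = rng.normal(size=(m, n))
H = X @ np.linalg.solve(X.T @ X, X.T)         # hat matrix
I = np.eye(m)

S_values = []
for _ in range(2000):
    y = X @ np.ones(n) + sigma * rng.normal(size=m)
    S_values.append(y @ (I - H) @ y)          # optimal objective for this sample

print(np.mean(S_values), (m - n) * sigma**2)  # the two numbers should be close
```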

Discussion

In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system.

Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations, where the best approximation is defined as that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called linear least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator.

In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic.

Properties

If the experimental errors, $\varepsilon$, are uncorrelated, have a mean of zero and a constant variance, $\sigma$, the Gauss–Markov theorem states that the least-squares estimator, $\hat{\boldsymbol\beta}$, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased.

For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss–Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
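As a one-line check of the arithmetic-mean statement (a standard derivation, included here for completeness), fitting the constant model $f(x) = \mu$ by least squares gives

$$S(\mu) = \sum_{i=1}^{m} (y_i - \mu)^2, \qquad \frac{dS}{d\mu} = -2\sum_{i=1}^{m} (y_i - \mu) = 0 \;\Longrightarrow\; \hat\mu = \frac{1}{m}\sum_{i=1}^{m} y_i,$$

which is exactly the arithmetic mean of the measurements.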

However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also a maximum likelihood estimator.[9]

These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.

Limitations

An assumption underlying the treatment given above is that the independent variable, x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares or more generally errors-in-variables models, or rigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[10][11]

In some cases the (weighted) normal equations matrix $X^{\mathsf T}X$ is ill-conditioned. When fitting polynomials the normal equations matrix is a Vandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases. In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate. Various regularization techniques can be applied in such cases, the most common of which is called ridge regression. If further information about the parameters is known, for example, a range of possible values of $\hat{\boldsymbol\beta}$, then various techniques can be used to increase the stability of the solution. For example, see constrained least squares.
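The following Python/NumPy sketch (synthetic data; the regularization strength is an arbitrary illustration) shows the Vandermonde conditioning problem and a ridge-regularized solve:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
for degree in (3, 10, 20):
    V = np.vander(x, degree + 1)                    # Vandermonde design matrix
    print(degree, np.linalg.cond(V.T @ V))          # condition number grows rapidly

# Ridge regression: minimize ||y - V b||^2 + lam * ||b||^2
y = np.sin(2 * np.pi * x)
V = np.vander(x, 21)
lam = 1e-6                                          # illustrative penalty, not tuned
beta_ridge = np.linalg.solve(V.T @ V + lam * np.eye(V.shape[1]), V.T @ y)
```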
Another drawback of the least squares estimator is the fact that the norm of the residuals, $\|\mathbf y - X\hat{\boldsymbol\beta}\|$, is minimized, whereas in some cases one is truly interested in obtaining small error in the parameter $\hat{\boldsymbol\beta}$, e.g., a small value of $\|\boldsymbol\beta - \hat{\boldsymbol\beta}\|$. However, since the true parameter $\boldsymbol\beta$ is necessarily unknown, this quantity cannot be directly minimized. If a prior probability on $\boldsymbol\beta$ is known, then a Bayes estimator can be used to minimize the mean squared error, $E\{\|\boldsymbol\beta - \hat{\boldsymbol\beta}\|^2\}$. The least squares method is often applied when no prior is known. Surprisingly, when several parameters are being estimated jointly, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James–Stein estimator. This is an example of more general shrinkage estimators that have been applied to regression problems.

Applications

  • Polynomial fitting: models are polynomials in an independent variable, x. Straight line: $f(x, \boldsymbol\beta) = \beta_1 + \beta_2 x$.[12] Quadratic: $f(x, \boldsymbol\beta) = \beta_1 + \beta_2 x + \beta_3 x^2$. Cubic, quartic and higher polynomials. For regression with high-order polynomials, the use of orthogonal polynomials is recommended[13] (see the sketch after this list).

  • Numerical smoothing and differentiation — this is an application of polynomial fitting.

  • Multinomials in more than one independent variable, including surface fitting

  • Curve fitting with B-splines [10]

  • Chemometrics, Calibration curve, Standard addition, Gran plot, analysis of mixtures
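
The sketch referred to in the polynomial-fitting item above (Python/NumPy, synthetic data): a high-degree fit in the ordinary monomial basis next to one in an orthogonal Chebyshev basis, which is the better-conditioned choice the text recommends.

```python
import numpy as np
from numpy.polynomial import Chebyshev

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x) + 0.01 * rng.normal(size=x.size)

coeffs_monomial = np.polyfit(x, y, deg=12)      # monomial (Vandermonde) basis
cheb_model = Chebyshev.fit(x, y, deg=12)        # orthogonal Chebyshev basis

print(np.max(np.abs(np.polyval(coeffs_monomial, x) - y)))   # both fits are close here,
print(np.max(np.abs(cheb_model(x) - y)))                    # but the Chebyshev solve is better conditioned
```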

Uses in data fitting

The primary application of linear least squares is in data fitting. Given a set of m data points $(x_1, y_1), (x_2, y_2), \dots, (x_m, y_m)$ consisting of experimentally measured values taken at m values $x_1, x_2, \dots, x_m$ of an independent variable ($x_i$ may be scalar or vector quantities), and given a model function $y = f(x, \boldsymbol\beta)$ with $\boldsymbol\beta = (\beta_1, \beta_2, \dots, \beta_n)$, it is desired to find the parameters $\beta_j$ such that the model function "best" fits the data. In linear least squares, linearity is meant to be with respect to the parameters $\beta_j$, so

$$f(x, \boldsymbol\beta) = \sum_{j=1}^{n} \beta_j \varphi_j(x).$$

Here, the functions $\varphi_j$ may be nonlinear with respect to the variable x.

Ideally, the model function fits the data exactly, so

$$y_i = f(x_i, \boldsymbol\beta)$$

for all $i = 1, 2, \dots, m$. This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of the residuals

$$r_i(\boldsymbol\beta) = y_i - f(x_i, \boldsymbol\beta), \qquad i = 1, 2, \dots, m,$$

so to minimize the function

$$S(\boldsymbol\beta) = \sum_{i=1}^{m} r_i^2(\boldsymbol\beta).$$

After substituting for $r_i$ and then for $f$, this minimization problem becomes the quadratic minimization problem above with

$$X_{ij} = \varphi_j(x_i),$$

and the best fit can be found by solving the normal equations.
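
A minimal Python/NumPy sketch of this setup, with an arbitrarily chosen basis $(1, x, \sin x)$ and synthetic data: build $X_{ij} = \varphi_j(x_i)$ and solve the normal equations.

```python
import numpy as np

def design_matrix(x, basis):
    """Stack the basis functions column-wise: X[i, j] = basis[j](x[i])."""
    return np.column_stack([phi(x) for phi in basis])

basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: np.sin(x)]
rng = np.random.default_rng(4)
x = np.linspace(0.0, 5.0, 40)
y = 1.0 + 2.0 * x + 0.5 * np.sin(x) + 0.05 * rng.normal(size=x.size)

X = design_matrix(x, basis)
beta = np.linalg.solve(X.T @ X, X.T @ y)     # normal equations
```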

Example

As a result of an experiment, four $(x, y)$ data points were obtained: $(1, 6)$, $(2, 5)$, $(3, 7)$, and $(4, 10)$. We hope to find a line $y = \beta_1 + \beta_2 x$ that best fits these four points. In other words, we would like to find the numbers $\beta_1$ and $\beta_2$ that approximately solve the overdetermined linear system

$$\begin{aligned} \beta_1 + 1\beta_2 &= 6 \\ \beta_1 + 2\beta_2 &= 5 \\ \beta_1 + 3\beta_2 &= 7 \\ \beta_1 + 4\beta_2 &= 10 \end{aligned}$$

of four equations in two unknowns in some "best" sense.

The residual, at each point, between the curve fit and the data is the difference between the right- and left-hand sides of the equations above. The least squares approach to solving this problem is to try to make the sum of the squares of these residuals as small as possible; that is, to find the minimum of the function

$$S(\beta_1, \beta_2) = [6 - (\beta_1 + 1\beta_2)]^2 + [5 - (\beta_1 + 2\beta_2)]^2 + [7 - (\beta_1 + 3\beta_2)]^2 + [10 - (\beta_1 + 4\beta_2)]^2.$$

The minimum is determined by calculating the partial derivatives of $S(\beta_1, \beta_2)$ with respect to $\beta_1$ and $\beta_2$ and setting them to zero:

$$\frac{\partial S}{\partial \beta_1} = 0 = 8\beta_1 + 20\beta_2 - 56,$$
$$\frac{\partial S}{\partial \beta_2} = 0 = 20\beta_1 + 60\beta_2 - 154.$$

This results in a system of two equations in two unknowns, called the normal equations, which when solved give

$$\beta_1 = 3.5, \qquad \beta_2 = 1.4,$$

and the equation $y = 3.5 + 1.4x$ of the line of best fit. The residuals, that is, the differences between the $y$ values from the observations and the values predicted by the line of best fit, are then found to be $1.1$, $-1.3$, $-0.7$, and $0.9$. The minimum value of the sum of squares of the residuals is

$$S(3.5, 1.4) = 1.1^2 + (-1.3)^2 + (-0.7)^2 + 0.9^2 = 4.2.$$

More generally, one can have $n$ regressors $x_j$, and a linear model

$$y = \beta_1 + \sum_{j=1}^{n} \beta_{j+1} x_j.$$
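
The worked example can be reproduced in a few lines of Python/NumPy; the numbers 3.5, 1.4 and 4.2 below come from the calculation above.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])
X = np.column_stack([np.ones_like(x), x])    # columns for beta_1 (intercept) and beta_2 (slope)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
print(beta)                   # approximately [3.5, 1.4]
print(np.sum(residuals**2))   # approximately 4.2
```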

Using a quadratic model

Importantly, in "linear least squares", we are not restricted to using a line as the model as in the above example. For instance, we could have chosen the restricted quadratic model $y = \beta_1 x^2$. This model is still linear in the $\beta_1$ parameter, so we can still perform the same analysis, constructing a system of equations from the data points:

$$6 = \beta_1 (1)^2, \quad 5 = \beta_1 (2)^2, \quad 7 = \beta_1 (3)^2, \quad 10 = \beta_1 (4)^2.$$

The partial derivatives with respect to the parameters (this time there is only one) are again computed and set to 0:

$$\frac{\partial S}{\partial \beta_1} = 0 = 708\beta_1 - 498,$$

and solved:

$$\beta_1 = \frac{498}{708} \approx 0.703,$$

leading to the resulting best fit model

$$y \approx 0.703\, x^2.$$
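
The same four points fitted to the one-parameter quadratic model, again in Python/NumPy, reproduce the value $498/708 \approx 0.703$ obtained above.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])
X = (x**2).reshape(-1, 1)                    # a single column: the x^2 regressor

beta1, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta1)                                 # approximately [0.703]
```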

See also

  • Line–line intersection (nearest point to non-intersecting lines), an application

  • Line fitting

  • Nonlinear least squares

  • Regularized least squares

  • Simple linear regression

  • Partial least squares regression

References

[1] Lai, T. L.; Robbins, H.; Wei, C. Z. (1978). "Strong consistency of least squares estimates in multiple regression". PNAS. 75 (7): 3034–3036. Bibcode:1978PNAS...75.3034L. doi:10.1073/pnas.75.7.3034. JSTOR 68164. PMC 392707. PMID 16592540.

[2] del Pino, Guido (1989). "The Unifying Role of Iterative Generalized Least Squares in Statistical Algorithms". Statistical Science. 4 (4): 394–403. doi:10.1214/ss/1177012408. JSTOR 2245853.

[3] Carroll, Raymond J. (1982). "Adapting for Heteroscedasticity in Linear Models". The Annals of Statistics. 10 (4): 1224–1233. doi:10.1214/aos/1176345987. JSTOR 2240725.

[4] Cohen, Michael; Dalal, Siddhartha R.; Tukey, John W. (1993). "Robust, Smoothly Heterogeneous Variance Regression". Journal of the Royal Statistical Society, Series C. 42 (2): 339–353. JSTOR 2986237.

[5] Nievergelt, Yves (1994). "Total Least Squares: State-of-the-Art Regression in Numerical Analysis". SIAM Review. 36 (2): 258–264. doi:10.1137/1036055. JSTOR 2132463.

[6] Tofallis, C. (2009). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472. SSRN 1406472.

[7] Hamilton, W. C. (1964). Statistics in Physical Science. New York: Ronald Press.

[8] Spiegel, Murray R. (1975). Schaum's Outline of Theory and Problems of Probability and Statistics. New York: McGraw-Hill. ISBN 978-0-585-26739-5.

[9] Margenau, Henry; Murphy, George Moseley (1956). The Mathematics of Physics and Chemistry. Princeton: Van Nostrand.

[10] Gans, Peter (1992). Data Fitting in the Chemical Sciences. New York: Wiley. ISBN 978-0-471-93412-7.

[11] Deming, W. E. (1943). Statistical Adjustment of Data. New York: Wiley.

[12] Acton, F. S. (1959). Analysis of Straight-Line Data. New York: Wiley.

[13] Guest, P. G. (1961). Numerical Methods of Curve Fitting. Cambridge: Cambridge University Press.

[14] "Least Squares Fitting". MathWorld (mathworld.wolfram.com).

[15] "Least Squares Fitting – Polynomial". MathWorld (mathworld.wolfram.com).