Aitken's delta-squared process

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926.[1] An early form of it was known to Seki Kōwa (end of the 17th century) and was used for rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.

Definition

Given a sequence $x = (x_n)_{n \in \mathbb{N}}$, one associates with this sequence the new sequence

$$A x = \left( \frac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2 x_{n+1} + x_n} \right)_{n \in \mathbb{N}},$$

which can, with improved numerical stability, also be written as

$$(A x)_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n},$$

or equivalently as

$$(A x)_n = x_{n+2} - \frac{(\Delta x_{n+1})^2}{\Delta^2 x_n},$$

where

$$\Delta x_n = x_{n+1} - x_n$$

and

$$\Delta^2 x_n = x_{n+2} - 2 x_{n+1} + x_n = \Delta x_{n+1} - \Delta x_n$$

for $n = 0, 1, 2, \dots$
Obviously, $A x$ is ill-defined if the sequence $\Delta^2 x$ contains a zero element, or equivalently, if the sequence of first differences has a repeating term.
From a theoretical point of view, if that occurs only for a finite number of indices, one could easily agree to consider the sequence $A x$ restricted to indices $n > n_0$ with a sufficiently large $n_0$. From a practical point of view, one does in general rather consider only the first few terms of the sequence, which usually provide the needed precision. Moreover, when numerically computing the sequence, one has to take care to stop the computation when rounding errors in the denominator become too large, since the Δ² operation may cancel too many significant digits. (For numerical calculation it is better to compute the denominator as $\Delta x_{n+1} - \Delta x_n = (x_{n+2} - x_{n+1}) - (x_{n+1} - x_n)$ rather than as $x_{n+2} - 2 x_{n+1} + x_n$.)
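As a concrete illustration, here is a minimal Python sketch (the helper name aitken is ours, not part of the article) that applies the numerically stable form $(A x)_n = x_n - (\Delta x_n)^2 / \Delta^2 x_n$ to a finite list of terms:

```python
def aitken(x):
    """Return the transformed sequence (Ax)_n for n = 0 .. len(x) - 3,
    using the numerically stable form (Ax)_n = x_n - (Δx_n)² / Δ²x_n."""
    ax = []
    for n in range(len(x) - 2):
        dx = x[n + 1] - x[n]                              # Δx_n
        d2x = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])   # Δ²x_n as a difference of differences
        if d2x == 0:
            raise ZeroDivisionError("Δ²x_n is zero, so (Ax)_n is undefined")
        ax.append(x[n] - dx * dx / d2x)
    return ax
```

For example, aitken([1.0, 1.5, 1.4166667]) returns approximately [1.4285714], which is the first Ax entry in Example 1 below.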

Properties

Aitken's delta-squared process is a method of acceleration of convergence, and a particular case of a nonlinear sequence transformation.

A sequence $x = (x_n)$ will converge linearly to a limit $\ell$ if there exists a number $\mu \in (0, 1)$ such that

$$\lim_{n \to \infty} \frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \mu.$$

Aitken's method will accelerate the sequence $x$ if

$$\lim_{n \to \infty} \frac{(A x)_n - \ell}{x_n - \ell} = 0.$$

$A$ is not a linear operator, but a constant term drops out, viz. $A[x - \ell] = A x - \ell$ if $\ell$ is a constant. This is clear from the expression of $A x$ in terms of the finite difference operator $\Delta$.

Although the new process does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence $x_{n+1} = f(x_n)$ for some function $f$, converging to a fixed point, the convergence is quadratic. In this case, the technique is known as Steffensen's method.

Empirically, the A-operation eliminates the "most important error term". One can check this by considering a sequence of the form $x_n = \ell + a^n + b^n$, where $0 < b < a < 1$: the sequence $A x$ then goes to the limit $\ell$ like $b^n$ goes to zero, as illustrated below.
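To see why a purely geometric error term is removed completely, suppose for the moment (a short check, not spelled out in the text above) that $x_n = \ell + c\,\mu^n$ with $c \neq 0$ and $0 < |\mu| < 1$. Then

$$\Delta x_n = c\,\mu^n(\mu - 1), \qquad \Delta^2 x_n = c\,\mu^n(\mu - 1)^2,$$

so

$$(A x)_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n} = \ell + c\,\mu^n - \frac{c^2 \mu^{2n} (\mu - 1)^2}{c\,\mu^n (\mu - 1)^2} = \ell.$$

A sequence whose error is exactly geometric is therefore mapped to its limit in one step; when the error also contains the smaller term $b^n$, that smaller term is what remains.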
One can also show that if $x$ goes to its limit $\ell$ at a rate strictly greater than 1, $A x$ does not have a better rate of convergence. (In practice, one rarely has, e.g., quadratic convergence, which would mean over 30 resp. 100 correct decimal places after 5 resp. 7 iterations, starting with 1 correct digit; usually no acceleration is needed in that case.)
In practice, $A x$ converges much faster to the limit than $x$ does, as demonstrated by the example calculations below. Usually, it is much cheaper to calculate $(A x)_n$ (involving only the calculation of differences, one multiplication and one division) than to calculate many more terms of the sequence $x$. Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the differences in the numerator and denominator of the expression.

Example calculations

Example 1: The value of $\sqrt{2} \approx 1.4142136$ can be approximated by assuming an initial value $x_0$ and iterating the following:

$$x_{n+1} = \frac{x_n + \dfrac{2}{x_n}}{2}.$$

Starting with $x_0 = 1$:
n   x = iterated value   Ax
0   1                    1.4285714
1   1.5                  1.4141414
2   1.4166667            1.4142136
3   1.4142157            --
4   1.4142136            --

It is worth noting here that Aitken's method does not save the cost of two iteration steps: computation of the first three Ax values required the first five x values. Also, the second Ax value is decidedly inferior to the 4th x value, mostly because Aitken's process assumes linear, rather than quadratic, convergence.
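The iteration and the Ax column can be reproduced with a few lines of Python (a minimal sketch for illustration; the variable names are ours):

```python
# x_{n+1} = (x_n + 2/x_n) / 2, starting from x_0 = 1, converging to sqrt(2).
x = [1.0]
for _ in range(4):
    x.append((x[-1] + 2.0 / x[-1]) / 2.0)

# Print n, x_n and, where defined, the Aitken value (Ax)_n.
for n in range(len(x)):
    if n + 2 < len(x):
        dx = x[n + 1] - x[n]
        d2x = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])
        print(n, x[n], x[n] - dx * dx / d2x)
    else:
        print(n, x[n], "--")   # Ax is not defined for the last two entries
```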

Example 2: The value of $\dfrac{\pi}{4}$ may be calculated as an infinite sum:

$$\frac{\pi}{4} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n + 1} \approx 0.785398.$$
n   term             x = partial sum   Ax
0   1                1                 0.79166667
1   −0.33333333      0.66666667        0.78333333
2   0.2              0.86666667        0.78630952
3   −0.14285714      0.72380952        0.78492063
4   0.11111111       0.83492063        0.78567821
5   −9.0909091×10⁻²  0.74401154        0.78522034
6   7.6923077×10⁻²   0.82093462        0.78551795
7   −6.6666667×10⁻²  0.75426795        --
8   5.8823529×10⁻²   0.81309148        --

In this example, Aitken's method is applied to a sublinearly converging series, accelerating convergence considerably. It is still sublinear, but much faster than the original convergence: the first Ax value, whose computation required the first three x values, is closer to the limit than the eighth x value.
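A similar sketch (again just an illustration, with our own variable names) reproduces this table from the partial sums of the alternating series above:

```python
from itertools import accumulate

# Terms (-1)^n / (2n + 1) and their partial sums x_n.
terms = [(-1) ** n / (2 * n + 1) for n in range(9)]
x = list(accumulate(terms))

for n in range(len(x)):
    if n + 2 < len(x):
        dx = x[n + 1] - x[n]
        d2x = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])
        print(n, terms[n], x[n], x[n] - dx * dx / d2x)
    else:
        print(n, terms[n], x[n], "--")
```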

Example pseudocode for Aitken extrapolation

The following is an example of using Aitken extrapolation to help find the limit of the sequence $x_{n+1} = f(x_n)$, given an initial value $x_0$; the limit is assumed to be the fixed point $p = f(p)$. For instance, we could take $f(x) = \frac{1}{2}\left(x + \frac{2}{x}\right)$, which has the fixed point $\sqrt{2}$, so that $f(\sqrt{2}) = \sqrt{2}$ (see Methods of computing square roots).
The procedure also computes an approximation of the derivative $f'(p)$ at the fixed point. The Aitken extrapolates are denoted by aitkenX. During the computation of each extrapolate one must check whether the denominator has become too small, which can happen once the iterates already agree to high accuracy, since dividing by a tiny number would introduce a large error; this small threshold is denoted by epsilon.
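A minimal Python sketch of this procedure (the function name aitken_fixed_point and the particular defaults of a 1e-10 tolerance, a 1e-16 epsilon and at most 20 iterations are illustrative choices, not prescribed by the article):

```python
import math

def aitken_fixed_point(f, x0, tolerance=1e-10, epsilon=1e-16, max_iterations=20):
    """Accelerate the fixed-point iteration x_{n+1} = f(x_n) with Aitken's Δ² formula.

    Returns the last Aitken extrapolate; stops early once two successive values agree
    to within `tolerance`, or when the Δ² denominator falls below `epsilon` (dividing
    by such a small number would greatly amplify rounding error).
    """
    aitken_x = x0
    for _ in range(max_iterations):
        x1 = f(x0)
        x2 = f(x1)

        if x1 != x0:
            lam = abs((x2 - x1) / (x1 - x0))   # optional diagnostic: estimates |f'(p)|, not used further here

        denominator = (x2 - x1) - (x1 - x0)    # Δ² written as a difference of differences
        if abs(denominator) < epsilon:
            print("WARNING: denominator is too small")
            break

        aitken_x = x2 - (x2 - x1) ** 2 / denominator
        if abs(aitken_x - x2) < tolerance:
            return aitken_x                    # converged to the requested tolerance

        x0 = aitken_x                          # restart the iteration from the extrapolate
    print("WARNING: not able to reach the desired tolerance of", tolerance)
    return aitken_x

# f(x) = (x + 2/x)/2 has the fixed point sqrt(2).
print(aitken_fixed_point(lambda x: (x + 2.0 / x) / 2.0, 1.0), math.sqrt(2))
```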

See also

  • Rate of convergence

  • Limit of a sequence

  • Fixed point iteration

  • Richardson extrapolation

  • Sequence transformation

  • Shanks transformation

  • Steffensen's method

References

[1] Alexander Aitken, "On Bernoulli's numerical solution of algebraic equations", Proceedings of the Royal Society of Edinburgh (1926) 46, pp. 289–305. (openlibrary.org)
[2] apps.nrbook.com, section 5.1.
[3] apps.nrbook.com, section 5.1.