Broyden–Fletcher–Goldfarb–Shanno algorithm

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.[1]

The BFGS method belongs to the family of quasi-Newton methods, a class of hill-climbing optimization techniques that seek a stationary point of a (preferably twice continuously differentiable) function. For such problems, a necessary condition for optimality is that the gradient be zero. Newton's method and the BFGS method are not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.[2]

In quasi-Newton methods, the Hessian matrix of second derivatives is not computed. Instead, it is approximated using updates specified by gradient evaluations (or approximate gradient evaluations). Quasi-Newton methods are generalizations of the secant method for finding the root of the first derivative in multidimensional problems. In more than one dimension, the secant equation does not determine a unique solution, and quasi-Newton methods differ in how they constrain it. The BFGS method is one of the most popular members of this class.[3] Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant handles simple box constraints.[4]

The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.[5][6][7][8]

Rationale

The optimization problem is to minimize $f(\mathbf{x})$, where $\mathbf{x}$ is a vector in $\mathbb{R}^n$ and $f$ is a differentiable scalar function. There are no constraints on the values that $\mathbf{x}$ can take.

The algorithm begins at an initial estimate $\mathbf{x}_0$ for the optimal value and proceeds iteratively to obtain a better estimate at each stage.

The search direction $\mathbf{p}_k$ at stage $k$ is given by the solution of the analogue of the Newton equation:

$$B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),$$

where $B_k$ is an approximation to the Hessian matrix, which is updated iteratively at each stage, and $\nabla f(\mathbf{x}_k)$ is the gradient of the function evaluated at $\mathbf{x}_k$. A line search in the direction $\mathbf{p}_k$ is then used to find the next point $\mathbf{x}_{k+1}$ by minimizing $f(\mathbf{x}_k + \gamma \mathbf{p}_k)$ over the scalar $\gamma > 0$.
The quasi-Newton condition imposed on the update of $B_k$ is

$$B_{k+1}(\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).$$

Let $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$ and $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$; then $B_{k+1}$ satisfies $B_{k+1}\mathbf{s}_k = \mathbf{y}_k$, which is the secant equation. The curvature condition $\mathbf{s}_k^{\top}\mathbf{y}_k > 0$ should be satisfied for $B_{k+1}$ to be positive definite, as can be verified by pre-multiplying the secant equation with $\mathbf{s}_k^{\top}$. If the function is not strongly convex, then the condition has to be enforced explicitly (for example, by choosing the step with a line search that satisfies the Wolfe conditions).
Instead of requiring the full Hessian matrix at the point $\mathbf{x}_{k+1}$ to be computed as $B_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:

$$B_{k+1} = B_k + U_k + V_k.$$
Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of $B_{k+1}$, the update can be chosen in the form $B_{k+1} = B_k + \alpha \mathbf{u}\mathbf{u}^{\top} + \beta \mathbf{v}\mathbf{v}^{\top}$. Imposing the secant condition $B_{k+1}\mathbf{s}_k = \mathbf{y}_k$ and choosing $\mathbf{u} = \mathbf{y}_k$ and $\mathbf{v} = B_k \mathbf{s}_k$, we obtain:[9]

$$\alpha = \frac{1}{\mathbf{y}_k^{\top}\mathbf{s}_k}, \qquad \beta = -\frac{1}{\mathbf{s}_k^{\top} B_k \mathbf{s}_k}.$$
Finally, we substitute $\alpha$ and $\beta$ into $B_{k+1} = B_k + \alpha \mathbf{u}\mathbf{u}^{\top} + \beta \mathbf{v}\mathbf{v}^{\top}$ and get the update equation of $B_{k+1}$:

$$B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^{\top}}{\mathbf{y}_k^{\top}\mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^{\top} B_k}{\mathbf{s}_k^{\top} B_k \mathbf{s}_k}.$$
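For concreteness, the following is a minimal NumPy sketch of this rank-two update; the function and variable names (bfgs_update, B, s, y) are illustrative, mirroring $B_k$, $\mathbf{s}_k$, $\mathbf{y}_k$, and are not taken from any standard library.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Rank-two BFGS update of the Hessian approximation B_k.

    B : (n, n) current symmetric positive-definite approximation B_k
    s : (n,)   step s_k = x_{k+1} - x_k
    y : (n,)   gradient difference y_k = grad f(x_{k+1}) - grad f(x_k)
    Returns B_{k+1} = B + y y^T / (y^T s) - (B s)(B s)^T / (s^T B s).
    """
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
```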

Algorithm

From an initial guess $\mathbf{x}_0$ and an approximate Hessian matrix $B_0$, the following steps are repeated as $\mathbf{x}_k$ converges to the solution:

  1. Obtain a direction $\mathbf{p}_k$ by solving $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k)$.

  2. Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step, so that $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$.

  3. Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.

  4. $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.

  5. $B_{k+1} = B_k + \dfrac{\mathbf{y}_k \mathbf{y}_k^{\top}}{\mathbf{y}_k^{\top}\mathbf{s}_k} - \dfrac{B_k \mathbf{s}_k \mathbf{s}_k^{\top} B_k}{\mathbf{s}_k^{\top} B_k \mathbf{s}_k}$.

$f(\mathbf{x})$ denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, $\|\nabla f(\mathbf{x}_k)\|$. If $B_0$ is initialized with $B_0 = I$, the first step will be equivalent to a gradient descent, but further steps are more and more refined by $B_k$, the approximation to the Hessian.
The first step of the algorithm is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving

$$B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^{\top}}{\mathbf{y}_k^{\top}\mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^{\top}}{\mathbf{y}_k^{\top}\mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^{\top}}{\mathbf{y}_k^{\top}\mathbf{s}_k}.$$

This can be computed efficiently without temporary matrices, recognizing that $B_k^{-1}$ is symmetric, and that $\mathbf{y}_k^{\top} B_k^{-1} \mathbf{y}_k$ and $\mathbf{s}_k^{\top}\mathbf{y}_k$ are scalars, using an expansion such as

$$B_{k+1}^{-1} = B_k^{-1} + \frac{\left(\mathbf{s}_k^{\top}\mathbf{y}_k + \mathbf{y}_k^{\top} B_k^{-1}\mathbf{y}_k\right)\left(\mathbf{s}_k \mathbf{s}_k^{\top}\right)}{\left(\mathbf{s}_k^{\top}\mathbf{y}_k\right)^2} - \frac{B_k^{-1}\mathbf{y}_k \mathbf{s}_k^{\top} + \mathbf{s}_k \mathbf{y}_k^{\top} B_k^{-1}}{\mathbf{s}_k^{\top}\mathbf{y}_k}.$$
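Putting the pieces together, the sketch below is a minimal NumPy implementation that maintains the inverse approximation $H_k = B_k^{-1}$ directly; the backtracking line search is a simplification (a practical implementation would enforce the Wolfe conditions), and all names are illustrative rather than taken from any particular library.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Minimize f by BFGS, maintaining the inverse Hessian approximation H_k."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                  # H_0 = I: the first step is a gradient-descent step
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:         # convergence test on the gradient norm
            break
        p = -H @ g                           # search direction p_k = -H_k grad f(x_k)
        # Backtracking line search enforcing only the Armijo (sufficient decrease) condition.
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p                        # s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                        # y_k = grad f(x_{k+1}) - grad f(x_k)
        sy = s @ y
        if sy > 1e-10:                       # curvature condition s_k^T y_k > 0
            rho = 1.0 / sy
            I = np.eye(n)
            H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                 + rho * np.outer(s, s))     # Sherman–Morrison form of the update
        x, g = x_new, g_new
    return x
```

For example, on the convex quadratic f(x) = x·x with gradient 2x, a call such as bfgs(lambda x: x @ x, lambda x: 2 * x, np.array([3.0, -4.0])) reaches the minimizer at the origin after a single accepted step.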

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[10]
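As an illustration of this use, the sketch below fits a maximum-likelihood estimate with BFGS and reads approximate standard errors off the diagonal of the returned inverse-Hessian approximation. The model, synthetic data, and variable names are invented for the example; it assumes SciPy's scipy.optimize.minimize with method="BFGS", which exposes its final inverse-Hessian approximation as hess_inv, subject to the caveat above that this approximation need not match the true Hessian.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)      # synthetic sample for illustration

def negloglik(theta):
    """Negative log-likelihood of a normal model; theta = (mu, log_sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

res = minimize(negloglik, x0=np.zeros(2), method="BFGS")
# res.hess_inv holds the final inverse-Hessian approximation of the negative
# log-likelihood; its diagonal gives approximate variances of the estimates.
std_errors = np.sqrt(np.diag(res.hess_inv))
print(res.x, std_errors)
```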

Notable implementations

  • The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.

  • The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.[11]

  • In the MATLAB Optimization Toolbox, the fminunc function[12] uses BFGS with cubic line search when the problem size is set to "medium scale."[13]

  • In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().[14]

  • In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS.[15] It is also possible to approximate full BFGS behaviour with the L-BFGS routines by setting the number of stored corrections (the memory parameter) to a very large value; a minimal BFGS call is sketched after this list.
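As a usage sketch of the SciPy routine mentioned above, the following minimizes the Rosenbrock test function with an analytic gradient; the starting point is an arbitrary common test choice.

```python
from scipy.optimize import fmin_bfgs, rosen, rosen_der

# Minimize the Rosenbrock function with BFGS, supplying the analytic gradient.
x_min = fmin_bfgs(rosen, x0=[-1.2, 1.0], fprime=rosen_der)
print(x_min)  # close to the true minimizer [1.0, 1.0]
```

In current SciPy versions the same algorithm is also available through scipy.optimize.minimize with method="BFGS".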

See also

  • BHHH algorithm

  • Davidon–Fletcher–Powell formula

  • Gradient descent

  • L-BFGS

  • Levenberg–Marquardt algorithm

  • Nelder–Mead method

  • Pattern search (optimization)

  • Quasi-Newton methods

  • Symmetric rank-one

References

[1] Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8.

[2] Curtis, Frank E.; Que, Xiaocun (2015), "A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees", Mathematical Programming Computation, 7 (4): 399–428, doi:10.1007/s12532-015-0086-2.

[3] Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-30303-1, p. 24.

[4] Byrd, Richard H.; Lu, Peihuang; Nocedal, Jorge; Zhu, Ciyou (1995), "A Limited Memory Algorithm for Bound Constrained Optimization", SIAM Journal on Scientific Computing, 16 (5): 1190–1208, CiteSeerX 10.1.1.645.5814, doi:10.1137/0916069.

[5] Broyden, C. G. (1970), "The Convergence of a Class of Double-Rank Minimization Algorithms", Journal of the Institute of Mathematics and Its Applications, 6: 76–90, doi:10.1093/imamat/6.1.76.

[6] Fletcher, R. (1970), "A New Approach to Variable Metric Algorithms", Computer Journal, 13 (3): 317–322, doi:10.1093/comjnl/13.3.317.

[7] Goldfarb, D. (1970), "A Family of Variable Metric Updates Derived by Variational Means", Mathematics of Computation, 24 (109): 23–26, doi:10.1090/S0025-5718-1970-0258249-6.

[8] Shanno, David F. (July 1970), "Conditioning of Quasi-Newton Methods for Function Minimization", Mathematics of Computation, 24 (111): 647–656, doi:10.1090/S0025-5718-1970-0274029-X, MR 0274029.

[9] Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8.

[10] Ge, Ren-pu; Powell, M. J. D. (1982), "The Convergence of Variable Metric Matrices in Unconstrained Optimization", Mathematical Programming, 27, North-Holland.

[11] https://www.gnu.org/software/gsl/manual/html_node/Multimin-Algorithms-with-Derivatives.html

[12] http://www.mathworks.com/help/toolbox/optim/ug/fminunc.html

[13] https://web.archive.org/web/20101028140551/http://www.mathworks.com/help/toolbox/optim/ug/brnoxr7-1.html#brnpcye

[14] https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html

[15] http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_bfgs.html#scipy.optimize.fmin_bfgs