Constrained Optimization and Lagrange Multiplier Methods

By Dimitri P. Bertsekas

This widely cited textbook, first published in 1982 by Academic Press, is an authoritative and comprehensive treatment of some of the most popular constrained optimization methods, including augmented Lagrangian/multiplier and sequential quadratic programming methods. Among its special features, the book: 1) treats augmented Lagrangian methods extensively, including an exhaustive analysis of the associated convergence and rate-of-convergence properties, 2) develops sequential quadratic programming and other Lagrangian methods comprehensively, 3) provides a detailed analysis of differentiable and nondifferentiable exact penalty methods, 4) presents nondifferentiable and minimax optimization methods based on smoothing, and 5) contains much in-depth research not found in any other textbook.
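As a rough illustration of the multiplier-method idea that the book treats, here is a minimal sketch of a first-order augmented Lagrangian iteration on a toy equality-constrained problem; the test problem, penalty schedule, tolerances, and the use of scipy's BFGS routine for the inner minimizations are illustrative assumptions, not the book's algorithmic prescriptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the method of multipliers (augmented Lagrangian) for
#   minimize f(x)  subject to  g(x) = 0.
# The test problem, penalty update rule, and tolerances are illustrative
# assumptions, not taken from the book.

def f(x):            # objective: squared distance from the point (2, 1)
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def g(x):            # single equality constraint: x0 + x1 = 1
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, c):
    # L_c(x, lam) = f(x) + lam * g(x) + (c/2) * g(x)^2
    return f(x) + lam * g(x) + 0.5 * c * g(x)**2

x = np.zeros(2)      # primal iterate
lam = 0.0            # multiplier estimate
c = 1.0              # penalty parameter

for k in range(20):
    # Unconstrained minimization of L_c(., lam), warm-started at the last iterate.
    res = minimize(augmented_lagrangian, x, args=(lam, c), method="BFGS")
    x = res.x
    lam = lam + c * g(x)        # first-order multiplier update
    c = min(10.0 * c, 1e6)      # increase the penalty parameter (capped)
    if abs(g(x)) < 1e-8:
        break

print(x, lam)        # expect x close to (1, 0) and lam close to 2
```

Each outer iteration minimizes the augmented Lagrangian approximately in x and then applies the first-order multiplier update lam ← lam + c·g(x); convergence and rate-of-convergence questions for exactly this kind of scheme are a central subject of the book.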



Similar mathematics_1 books

Mathematics, Affect and Learning: Middle School Students' Beliefs and Attitudes About Mathematics Education

This book examines the beliefs, attitudes, values and emotions of students in Years 5 to 8 (aged 10 to 14 years) about mathematics and mathematics education. Fundamentally, it focuses on the development of affective views of and responses to mathematics and mathematics learning. Moreover, because students appear to develop their more negative views of mathematics during the middle school years (Years 5 to 8), the book concentrates on students in this critical period.

Additional resources for Constrained Optimization and Lagrange Multiplier Methods

Sample text

Choose $\delta_1 > 0$ sufficiently small so that $\nabla g(x)\nabla g(x)'$ is positive definite for all $x$ with $|x - x^*| \le \delta_1$, and let $\Lambda > 0$ and $\lambda > 0$ be upper and lower bounds on the eigenvalues of $[\nabla g(x)\nabla g(x)']^{1/2}$ for $x \in S(x^*; \delta_1)$. Hence, we have $\lambda |x - x^*| \le |g(x)| \le \Lambda |x - x^*|$ for all $x \in S(x^*; \delta_1)$. Now from (48), it follows easily that, given any $r > 0$, we can find a $\delta_r \in (0, \delta_1]$ such that if $x_k \in S(x^*; \delta_r)$, then $|x_{k+1} - x^*| \le (\lambda r/\Lambda)\,|x_k - x^*| \le r\,|x_k - x^*|$, thereby showing (42). Combining the last two inequalities we also obtain $|g(x_{k+1})| \le \dots$
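The excerpt is cut off in the final inequality; a plausible completion, inferred by combining the two-sided bound on $|g(x)|$ with the contraction estimate above (an assumption about the missing text, not a quotation from the book), is:

```latex
% Using |g(x_{k+1})| <= Lambda |x_{k+1} - x*|, the contraction
% |x_{k+1} - x*| <= (lambda r / Lambda) |x_k - x*|, and |x_k - x*| <= |g(x_k)| / lambda:
\[
  |g(x_{k+1})|
    \;\le\; \Lambda\,|x_{k+1} - x^*|
    \;\le\; \Lambda\,\frac{\lambda r}{\Lambda}\,|x_k - x^*|
    \;=\; \lambda r\,|x_k - x^*|
    \;\le\; r\,|g(x_k)| .
\]
```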

It satisfies $\nabla f(x_k)'d < 0$. This will be automatically satisfied if the approximate method used is a descent method for solving the quadratic optimization problem: minimize $\tfrac{1}{2}d'H_k d + \nabla f(x_k)'d$ subject to $d \in R^n$, and the starting point $d_0 = 0$ is used, for the descent property implies $\tfrac{1}{2}d'H_k d + \nabla f(x_k)'d < \tfrac{1}{2}d_0'H_k d_0 + \nabla f(x_k)'d_0 = 0$, or $\nabla f(x_k)'d < -\tfrac{1}{2}d'H_k d < 0$. As will be seen in the next section, the conjugate gradient method has this property. Conditions on the accuracy of the approximate solution $d$ that ensure a linear or superlinear rate of convergence in connection with approximate methods are given in Dembo et al.
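As a small numerical sketch of this observation (the positive definite $H_k$ and the gradient below are arbitrary illustrative data, not from the book), a few conjugate gradient steps on the quadratic subproblem, started at $d_0 = 0$, drive the quadratic value below zero, which forces $\nabla f(x_k)'d < 0$:

```python
import numpy as np

# Sketch: approximate solution of  minimize (1/2) d'H_k d + grad_f'd  by a few
# conjugate gradient steps started at d0 = 0.  H_k and grad_f are arbitrary
# illustrative data; the check is that the resulting d is a descent direction.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Hk = A @ A.T + 5 * np.eye(5)        # positive definite Hessian approximation
grad_f = rng.standard_normal(5)     # gradient of f at x_k

def cg_on_quadratic(H, g, num_steps):
    """Run num_steps of conjugate gradient on (1/2) d'Hd + g'd from d = 0."""
    d = np.zeros_like(g)
    r = -(H @ d + g)                # negative gradient of the quadratic at d
    p = r.copy()
    for _ in range(num_steps):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

for steps in (1, 2, 5):
    d = cg_on_quadratic(Hk, grad_f, steps)
    quad = 0.5 * d @ Hk @ d + grad_f @ d
    print(steps, grad_f @ d < 0, quad < 0)   # both should print True
```

Because the quadratic value at $d_0 = 0$ is zero and each conjugate gradient step decreases it, the approximate solution satisfies the descent condition after any number of steps, as the argument above requires.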

We have the following result, the proof of which we leave as an exercise for the reader: … where $a$ and $b$ are the largest and smallest eigenvalues of $M$. Show also that the vector $x_{k+1}$ generated by the scaled conjugate gradient method with $H = M^{-1}$ minimizes $f$. [Hint: Use the interlocking eigenvalues lemma of Luenberger (1973, p. …).] The $(k+1)$-step scaled conjugate gradient method is particularly interesting when $Q$ is of the form (77), $k$ is small relative to $n$, and systems of equations involving $M$ can be solved easily (see Bertsekas, 1974a).
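To illustrate the $(k+1)$-step behaviour numerically, here is a sketch of the scaled (preconditioned) conjugate gradient iteration with $H = M^{-1}$. Since the form (77) of $Q$ is not reproduced in this excerpt, the sketch assumes $Q = M + {}$(rank-$k$ correction) with $M$ diagonal, so that systems involving $M$ are trivial to solve; this is standard preconditioned CG algebra, not the book's exact construction.

```python
import numpy as np

# Illustrative sketch of the (k+1)-step behaviour of the scaled (preconditioned)
# conjugate gradient method with H = M^{-1}.  The form (77) of Q is assumed to
# be Q = M + (rank-k correction), with M diagonal so that M z = r is trivial.

rng = np.random.default_rng(1)
n, k = 50, 3
M = np.diag(rng.uniform(1.0, 5.0, n))      # easy-to-invert part
U = rng.standard_normal((n, k))
Q = M + U @ U.T                            # Q = M + rank-k term
b = rng.standard_normal(n)

def pcg(Q, b, M_diag, num_iters):
    """Preconditioned CG for Q x = b with diagonal preconditioner M."""
    x = np.zeros_like(b)
    r = b - Q @ x
    z = r / M_diag                         # solve M z = r (M diagonal)
    p = z.copy()
    for _ in range(num_iters):
        Qp = Q @ p
        alpha = (r @ z) / (p @ Qp)
        x = x + alpha * p
        r_new = r - alpha * Qp
        z_new = r_new / M_diag
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

M_diag = np.diag(M)
for iters in (k, k + 1):
    x = pcg(Q, b, M_diag, iters)
    print(iters, np.linalg.norm(Q @ x - b))   # residual is ~0 after k+1 steps
```

With this structure the preconditioned matrix has at most $k+1$ distinct eigenvalues, so in exact arithmetic the iteration terminates after $k+1$ steps; the final loop checks this numerically.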
