Abstract
In this chapter we initiate the presentation, analysis, and comparison of algorithms designed to solve constrained minimization problems. The four chapters that consider such problems roughly correspond to the following classification scheme. Consider a constrained minimization problem having n variables and m constraints. Methods can be devised for solving this problem that work in spaces of dimension n − m, n, m, or n + m. Each of the following chapters corresponds to methods in one of these spaces. Thus, the methods in the different chapters represent quite different approaches and are founded on different aspects of the theory. However, there are also strong interconnections between the methods of the various chapters, both in the final form of implementation and in their performance. Indeed, there soon emerges the theme that the rates of convergence of most practical algorithms are determined by the structure of the Hessian of the Lagrangian much like the structure of the Hessian of the objective function determines the rates of convergence for a wide assortment of methods for unconstrained problems. Thus, although the various algorithms of these chapters differ substantially in their motivation, they are ultimately found to be governed by a common set of principles.
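As a rough illustration of the dimension counts above, the sketch below (all data are hypothetical, chosen only for illustration) solves a small equality-constrained quadratic program with n = 3 variables and m = 1 constraint by working in the (n − m)-dimensional null space of the constraint matrix — the reduced-space, primal point of view of this chapter:

```python
import numpy as np

# Hypothetical equality-constrained QP: minimize (1/2) x'Qx - c'x  s.t.  Ax = b,
# with n = 3 variables and m = 1 constraint, so a reduced (null-space) method
# works in dimension n - m = 2.
Q = np.diag([2.0, 4.0, 6.0])           # positive definite Hessian (made-up data)
c = np.array([1.0, 1.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]])        # single linear constraint
b = np.array([1.0])

# A particular solution of Ax = b, and a basis Z for the null space of A.
x0, *_ = np.linalg.lstsq(A, b, rcond=None)
Z = np.linalg.svd(A)[2][1:].T          # columns span null(A); shape (3, 2)

# Substituting x = x0 + Z v turns the problem into an unconstrained quadratic
# in v of dimension n - m = 2; set its gradient to zero and solve for v.
v = np.linalg.solve(Z.T @ Q @ Z, Z.T @ (c - Q @ x0))
x = x0 + Z @ v

print(Z.shape[1])                      # reduced dimension: 2
print(np.allclose(A @ x, b))           # feasibility: True
```

Every iterate of the form x0 + Z v is feasible by construction, which is the defining feature of the primal methods treated here.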
Notes
1. Actually, a more standard procedure is to define the pseudoinverse \(\overline{\mathbf{L}}_{k}^{\dag }\), and then \(\mathbf{z} = \overline{\mathbf{L}}_{k}^{\dag }\mathbf{y}_{k}\).
2. The exact solution is obviously symmetric about the center of the chain, and hence the problem could be reduced to having ten links and only one constraint. However, this symmetry disappears if the first constraint value is specified as nonzero. Therefore, for generality, we solve the full chain problem.
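The pseudoinverse step in note 1 can be illustrated with NumPy's `pinv`; the matrix and vector here are made up, standing in for \(\overline{\mathbf{L}}_{k}\) and \(\mathbf{y}_{k}\):

```python
import numpy as np

# Illustrative stand-in for the note's matrix L_k and vector y_k
# (values are hypothetical; only the pseudoinverse step is the point).
L = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])             # tall matrix with full column rank
y = np.array([1.0, 2.0, 2.0])

# z = L† y is the least-squares solution of L z ≈ y.
z = np.linalg.pinv(L) @ y

# For full column rank, L† = (L'L)^{-1} L', so z agrees with lstsq.
z_ls, *_ = np.linalg.lstsq(L, y, rcond=None)
print(np.allclose(z, z_ls))            # True
```

When L has full column rank, the pseudoinverse solve and the normal-equations solve coincide; `pinv` is simply the more robust formulation when rank deficiency is possible.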
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this chapter
Luenberger, D.G., Ye, Y. (2016). Primal Methods. In: Linear and Nonlinear Programming. International Series in Operations Research & Management Science, vol 228. Springer, Cham. https://doi.org/10.1007/978-3-319-18842-3_12
Print ISBN: 978-3-319-18841-6
Online ISBN: 978-3-319-18842-3