
Convergence conditions for restarted conjugate gradient methods with inaccurate line searches

Published in Mathematical Programming

Abstract

Convergence properties of restarted conjugate gradient methods are investigated for the case where the usual requirement that an exact line search be performed at each iteration is relaxed.

The objective function is assumed to have continuous second derivatives and the eigenvalues of the Hessian are assumed to be bounded above and below by positive constants. It is further assumed that a Lipschitz condition on the second derivatives is satisfied at the location of the minimum.

A class of descent methods is described which exhibits n-step quadratic convergence when restarted, even though errors are permitted in the line search. It is then shown that two conjugate gradient methods belong to this class.
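The setting the abstract describes can be illustrated with a short sketch: a nonlinear conjugate gradient iteration (the Fletcher–Reeves update is used here purely as an example of the family) that is restarted with the steepest-descent direction every n steps and that uses an inexact backtracking line search rather than an exact minimization. The function names, parameters, and safeguards below are illustrative assumptions, not the specific methods or conditions analyzed in the paper.

```python
import numpy as np

def restarted_cg(f, grad, x0, n_restart=None, tol=1e-8, max_iter=500):
    """Restarted Fletcher-Reeves CG with an inexact (Armijo backtracking) line search.

    Illustrative sketch only: the paper's convergence theory concerns this
    general family (restart every n steps, errors allowed in the line search),
    not this particular implementation.
    """
    x = np.asarray(x0, dtype=float)
    n = n_restart or x.size          # restart period: problem dimension by default
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:               # safeguard: fall back to steepest descent
            d = -g
        # Inexact line search: backtrack until the Armijo condition holds.
        t, c, rho = 1.0, 1e-4, 0.5
        fx, slope = f(x), g @ d
        while f(x + t * d) > fx + c * t * slope:
            t *= rho
        x = x + t * d
        g_new = grad(x)
        if (k + 1) % n == 0:
            d = -g_new                            # restart step
        else:
            beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
            d = -g_new + beta * d
        g = g_new
    return x
```

On a strongly convex quadratic (whose Hessian has eigenvalues bounded above and below by positive constants, as the abstract assumes) the iteration drives the gradient to zero even though each step length only satisfies a sufficient-decrease condition rather than minimizing along the search direction exactly.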




Additional information

Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462.


Cite this article

Lenard, M.L. Convergence conditions for restarted conjugate gradient methods with inaccurate line searches. Mathematical Programming 10, 32–51 (1976). https://doi.org/10.1007/BF01580652
