
Convergence of the Nonmonotone Perry and Shanno Method for Optimization

Abstract

In this paper, a new nonmonotone conjugate gradient method is introduced, which can be regarded as a generalization of the Perry and Shanno memoryless quasi-Newton method. For convex objective functions, the proposed nonmonotone conjugate gradient method is proved to be globally convergent. Its global convergence for non-convex objective functions is also studied. Numerical experiments indicate that it can efficiently solve large-scale optimization problems.
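To make the ingredients named above concrete, the sketch below pairs a Perry and Shanno type (self-scaling memoryless BFGS) search direction with a nonmonotone Armijo-type acceptance rule in the spirit of Grippo, Lampariello, and Lucidi [11]. It is an illustration only and not the algorithm analyzed in this paper; the backtracking factor, the parameters delta and M, the curvature safeguard, and the fallback to steepest descent are assumptions made for the example.

```python
# Illustrative sketch only: a generic nonmonotone (GLL-style) Armijo line search
# combined with a Perry-Shanno memoryless quasi-Newton search direction.
# This is not the paper's exact method; the safeguards and parameter choices
# below are assumptions made for the example.
import numpy as np

def perry_shanno_direction(g_new, s, y):
    """Memoryless BFGS (Perry-Shanno) direction d = -H g, with H built from s, y."""
    sy = s @ y
    yy = y @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g_new                      # safeguard: fall back to steepest descent
    tau = sy / yy                          # self-scaling factor
    sg, yg = s @ g_new, y @ g_new
    return -tau * (g_new - (sg / sy) * y + (2.0 * yy * sg / sy**2 - yg / sy) * s)

def nonmonotone_cg(f, grad, x0, M=10, delta=1e-4, max_iter=500, tol=1e-6):
    x, g = x0.astype(float), grad(x0)
    d = -g
    recent_f = [f(x)]                      # window of recent function values
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Nonmonotone Armijo condition: accept alpha if
        # f(x + alpha d) <= max(recent f values) + delta * alpha * g'd
        alpha, f_ref, gd = 1.0, max(recent_f), g @ d
        while f(x + alpha * d) > f_ref + delta * alpha * gd:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        d = perry_shanno_direction(g_new, s, y)
        x, g = x_new, g_new
        recent_f.append(f(x))
        recent_f = recent_f[-M:]           # keep only the last M values
    return x
```

Taking M = 1 recovers the usual monotone Armijo rule, while a larger window lets the iterates occasionally increase f, which is the nonmonotone behaviour the convergence analysis is concerned with.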

References

  1. E.M.L. Beale, “On an iterative method of finding a local minimum of a function of more than one variable,” Technical Report No. 25, Statistical Techniques Research Group, Princeton University, N.J., 1958.

  2. I. Bongartz, A.R. Conn, N. Gould, and Ph.L. Toint, “CUTE: constrained and unconstrained testing environment,” Research Report, IBM T.J. Watson Research Center, Yorktown Heights, NY, 1993.

  3. K.M. Brown and J.E. Dennis Jr., “New computational algorithms for minimizing a sum of squares of nonlinear functions,” Report No. 71-6, Department of Computer Science, Yale University, New Haven, Connecticut, U.S.A., March 1971.

  4. A. Buckley, “A combined conjugate gradient quasi-Newton minimization algorithm,” Mathematical Programming, vol. 15, pp. 200–210, 1978.

  5. A. Buckley and A. LeNir, “QN-like variable storage conjugate gradients,” Mathematical Programming, vol. 27, pp. 155–175, 1983.

  6. R.H. Byrd and J. Nocedal, “A tool for the analysis of quasi-Newton methods with application to unconstrained minimization,” SIAM J. Numer. Anal., vol. 26, pp. 727–739, 1989.

  7. R.H. Byrd, J. Nocedal, and R.B. Schnabel, “Representations of quasi-Newton matrices and their use in limited memory methods,” Mathematical Programming, vol. 63, pp. 129–156, 1994.

  8. R.H. Byrd, J. Nocedal, and Y. Yuan, “Global convergence of a class of quasi-Newton methods on convex problems,” SIAM J. Numer. Anal., vol. 24, pp. 1171–1189, 1987.

  9. P.E. Gill and W. Murray, “The numerical solution of a problem in the calculus of variations,” in Recent Mathematical Developments in Control, D.J. Bell (Ed.), Academic Press: New York, 1973, pp. 97–122.

  10. P.E. Gill and W. Murray, “Conjugate gradients for large-scale nonlinear optimization,” Technical Report SOL 79-15, Department of Operations Research, Stanford University, Stanford, CA, 1979.

  11. L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone linesearch technique for Newton's method,” SIAM J. Numer. Anal., vol. 23, pp. 707–716, 1986.

  12. J.Y. Han and G.H. Liu, “General form of stepsize selection rule of linesearch and relevant analysis of global convergence of BFGS algorithm,” Acta Mathematicae Applicatae Sinica, vol. 8, no. 1, pp. 112–122, 1992.

  13. D.C. Liu and J. Nocedal, “Test results of two limited memory methods for large scale optimization,” Technical Report NAM 04, Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, 1988.

  14. D.C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming, vol. 45, pp. 503–528, 1989.

  15. G.H. Liu, J.Y. Han, and D.F. Sun, “Global convergence of the BFGS algorithm with nonmonotone linesearch,” Optimization, vol. 34, pp. 147–159, 1995.

  16. G.H. Liu, L.L. Jing, L.X. Han, and D. Han, “A class of nonmonotone conjugate gradient methods for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 101, no. 1, 1999.

  17. S. Lucidi and M. Roma, “Nonmonotone conjugate gradient methods for optimization,” in System Modelling and Optimization, J. Henry and J.D. Yvon (Eds.), Springer-Verlag, 1995. Lecture Notes in Control and Information Sciences.

  18. J.J. Moré, B.S. Garbow, and K.E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1980.

  19. L. Nazareth, “A relationship between the BFGS and conjugate gradient algorithms and its implications for new algorithms,” SIAM Journal on Numerical Analysis, vol. 16, pp. 794–800, 1979.

  20. J. Nocedal, “Updating quasi-Newton matrices with limited storage,” Mathematics of Computation, vol. 35, pp. 773–782, 1980.

  21. M.J.D. Powell, “Some global convergence properties of a variable metric algorithm for minimization without exact linesearches,” in Nonlinear Programming, SIAM-AMS Proceedings, Vol. IX., R.W. Cottle and C.E. Lemke (Eds.), American Mathematical Society, Providence, RI, 1976.

  22. J.M. Perry, “A class of conjugate gradient algorithms with a two step variable metric memory,” Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston, IL, 1977.

  23. D.F. Shanno, “On the convergence of a new conjugate gradient algorithm,” SIAM Journal on Numerical Analysis, vol. 15, pp. 1247–1257, 1978.

  24. D.F. Shanno, “Conjugate gradient methods with inexact searches,” Mathematics of Operations Research, vol. 3, pp. 244–256, 1978.

  25. F.F. Sisser, “A modified Newton's method for minimizing factorable functions,” Manuscript, Queens College of The City University of New York, Flushing, N.Y., U.S.A., 1980.

  26. Ph.L. Toint, “Test problems for partially separable optimization and results for the routine PSPMIN,” Technical Report 83/4, Facultés Universitaires de Namur, Department of Mathematics, B-5000 Namur, Belgium, 1983.

  27. J. Werner, “Global convergence of quasi-Newton methods with practical linesearches,” Technical Report NAM-Bericht Nr. 67, March 1989.

Cite this article

Liu, G., Jing, L. Convergence of the Nonmonotone Perry and Shanno Method for Optimization. Computational Optimization and Applications 16, 159–172 (2000). https://doi.org/10.1023/A:1008753308646
