ON THE COMPLEXITY OF STEEPEST DESCENT, NEWTON'S AND REGULARIZED NEWTON'S METHODS FOR NONCONVEX UNCONSTRAINED OPTIMIZATION PROBLEMS

C. Cartis, N. I. M. Gould, Ph. L. Toint

Research output: Contribution to journal › Article › peer-review

Abstract

It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization may, under standard assumptions, both require a number of iterations and function evaluations arbitrarily close to O(ε^{-2}) to drive the norm of the gradient below ε. This shows that the O(ε^{-2}) upper bound on evaluations known for steepest descent is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation-complexity bound of O(ε^{-3/2}) evaluations known for cubically regularized Newton's methods is also shown to be tight.
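The O(ε^{-2}) bound discussed above counts the gradient evaluations that steepest descent needs to reach a point with small gradient norm. A minimal sketch of the mechanism, assuming a fixed step length 1/L where L bounds the Lipschitz constant of the gradient (the test function f(x) = x²/2 + cos(x) is my own illustrative choice, not taken from the paper):

```python
import math

def steepest_descent(grad, x0, eps, L):
    """Run gradient descent with fixed step 1/L until |grad f(x)| <= eps.

    With a Lipschitz-continuous gradient, each step decreases f by at
    least |g|^2 / (2L); summing this decrease over the iterations is
    what yields the O(eps^-2) worst-case evaluation bound.
    """
    x = x0
    evals = 0
    while True:
        g = grad(x)
        evals += 1
        if abs(g) <= eps:
            return x, evals
        x = x - g / L  # fixed steepest-descent step of length 1/L

# Illustrative smooth nonconvex function: f(x) = x^2/2 + cos(x),
# whose gradient x - sin(x) has Lipschitz constant at most L = 2.
grad = lambda x: x - math.sin(x)
x_star, n_evals = steepest_descent(grad, x0=3.0, eps=1e-3, L=2.0)
```

The counter `n_evals` makes the complexity measure of the paper concrete: rerunning with smaller `eps` shows the evaluation count growing as the tolerance tightens, and the paper's contribution is an example family on which this growth is essentially as bad as ε^{-2} permits.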

Original language: English
Pages (from-to): 2833-2852
Number of pages: 20
Journal: SIAM Journal on Optimization
Volume: 20
Issue number: 6
DOIs
Publication status: Published - 2010

Keywords

  • nonlinear optimization
  • unconstrained optimization
  • steepest-descent method
  • Newton's method
  • trust-region methods
  • cubic regularization
  • global complexity bounds
  • global rate of convergence
