Abstract

Chapter 6 showed that there is no universal modeling tool; the situation is similar for optimization tools. Research on mathematical programming and computerized optimization tools (often called optimization solvers) is now over 50 years old, having started even before modern computers were invented. It has produced a wealth of results covering diverse areas, and presenting them all in detail would require far more than a single chapter. We therefore assume that the reader already knows the basic concepts and some of the theory of mathematical programming (e.g., Luenberger, 1984; Fletcher, 1987; Williams, 1993), and we reintroduce basic notation and results only where needed to comment on specific features of optimization solvers.

References

  1. Fourer, R., and Gregory, J., 1997, Linear Programming FAQ, http://www.mcs.anl.gov/home/otc/faq/linear-programming-faq.html, Usenet sci.answers, anonymous FTP /pub/usenet/sci.answers/linear-programming-faq from rtfm.mit.edu.

  2. Fourer, R., and Gregory, J., 1997, Nonlinear Programming FAQ, http://www.mcs.anl.gov/home/otc/faq/nonlinear-programming-faq.html, Usenet sci.answers, anonymous FTP /pub/usenet/sci.answers/nonlinear-programming-faq from rtfm.mit.edu.

  3. In Chapters 4 and 8, and also in other parts of this book, we use a maxmin aggregation when defining an achievement scalarizing function; in the minmax aggregation we simply change the signs of the functions. However, we should warn the reader that the maxmin aggregation might result in weakly efficient solutions, which are unacceptable in practice. To avoid them, it is sufficient, for example, to add the sum of the objectives, multiplied by a small coefficient, to the minimum of these functions (see Chapters 4 and 8, and the sketch below). Even good optimization systems, such as the Optimization Toolbox in MATLAB (Pärt-Enander et al., 1996), treat multi-objective optimization without the attention it deserves and instruct the user simply to apply a minmax (or maxmin) aggregation, without warning that this can produce weakly efficient solutions. One can find even worse advice on some Web pages on optimization, where the suggested treatment of a multi-objective problem is to convert it into a single-objective one by summing the objectives with positive weighting coefficients; as shown in other chapters of this book, this is possibly the worst way of treating multi-objective problems and can lead the user into serious difficulties.
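
     As a sketch of the correction mentioned above (the notation here is illustrative, not necessarily that of Chapters 4 and 8): for objectives q_1(x), ..., q_k(x) to be maximized over a feasible set X, the plain maxmin aggregation max over x of min over i of q_i(x) can be replaced by

     \[ \max_{x \in X} \Big( \min_{1 \le i \le k} q_i(x) + \varepsilon \sum_{i=1}^{k} q_i(x) \Big), \qquad \varepsilon > 0 \text{ small}, \]

     where the small regularizing sum breaks the ties responsible for weak efficiency: a point at which some objective can still be improved without worsening the minimum no longer maximizes the corrected function.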

  4. For example, in the GAMS modeling system, equation (7.3) can be represented using special aliased sets in assignments (see Brooke et al., 1992); no such tricks are required to represent equation (7.3) in the AMPL modeling language (see Fourer et al., 1993).

  5. In 1980, L. Khachian from the Computing Laboratory in Moscow, Russia, used a nonlinear optimization algorithm of N. Shor from Kiev, Ukraine, to prove that linear programming problems can be solved in a number of iterations that grows polynomially with the dimensions of the problem. This created renewed interest in large-scale linear programming. In 1984, N. Karmarkar from Bell Laboratories in the USA proposed another, more practical, nonlinear programming approach to linear programming. Further research led to the so-called barrier and interior point methods. It is worth noting, however, that the idea of using nonlinear optimization algorithms for linear programming problems did not originate with Khachian. In 1977, J. Sosnowski (Warsaw, Poland) had already used a special nonlinear optimization algorithm for linear programming problems; this algorithm is included in the HYBRID solver described briefly in Section 7.2.5 of this chapter. Moreover, this algorithm used earlier ideas of O.L. Mangasarian and R.T. Rockafellar (Seattle and Madison, USA, respectively); in 1976, Rockafellar had already proven finite convergence of an augmented Lagrangian multiplier method applied to piecewise linear programming problems (which could also be used to obtain a polynomial estimate of the number of iterations needed).

  6. The basic idea for this algorithm was presented in 1977 (Sosnowski, 1981); hence, it is probably one of the oldest nonlinear programming algorithms applied to linear programming. The theoretical basis of this algorithm, using an augmented Lagrangian approach and including the proof of finite convergence of a variant applied to piecewise linear programming problems, is attributed to Rockafellar (1976) and to earlier ideas of Mangasarian (summarized later in Mangasarian, 1981). The basic idea of regularization of optimization problems as used in this algorithm goes back to Polyak and Tretiyakov (1972), who in turn exploited the even earlier general concept of Tikhonov regularization. The basic idea of an augmented Lagrangian multiplier iteration also relies on earlier work on iteratively shifted penalty functions (see Powell, 1969, for equality constraints and Wierzbicki, 1971, for inequality constraints); a standard form is sketched below.
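
     To make the multiplier iteration concrete, here is a standard textbook form for inequality constraints (a generic sketch, not necessarily the exact variant implemented in HYBRID): for the problem of minimizing f(x) subject to g_i(x) <= 0, i = 1, ..., m, the augmented Lagrangian with penalty coefficient \rho > 0 is

     \[ L_\rho(x, \lambda) = f(x) + \frac{1}{2\rho} \sum_{i=1}^{m} \Big( \max\{0,\; \lambda_i + \rho\, g_i(x)\}^2 - \lambda_i^2 \Big), \]

     and each outer iteration minimizes L_\rho over x, then updates the multipliers by

     \[ \lambda_i \leftarrow \max\{0,\; \lambda_i + \rho\, g_i(x)\}. \]

     Read this way, the multiplier update shifts the onset of the quadratic penalty by \lambda_i / \rho, which is precisely the iteratively shifted penalty interpretation mentioned above.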

  7. This holds if second-order sufficient conditions for optimality are satisfied at x; see Rockafellar (1976).

  8. www.isr.umd.edu/Labs/CACSE/FSQP/fsqp.html.

  9. The objective function is sometimes called the goal function. However, the latter term is also associated with goal programming; therefore, we prefer the term objective function in this book.

  10. Recall that such a continuum, a nonlinear manifold of nonunique solutions, is characteristic of the illustrative example in Chapter 6.

  11. The authors advise users of dual solutions to read Jansen et al. (1993) and Güler et al. (1993). The former provides a good summary of the related problems and of experience with applications; the latter presents a survey of degeneracy in interior point methods, which many users consider to be free of degeneracy problems.

Editor information

Andrzej P. Wierzbicki, Marek Makowski, Jaap Wessels

Copyright information

© 2000 Springer Science+Business Media Dordrecht

Cite this chapter

Granat, J., Makowski, M., Wierzbicki, A.P. (2000). Optimization Tools. In: Wierzbicki, A.P., Makowski, M., Wessels, J. (eds) Model-Based Decision Support Methodology with Environmental Applications. The International Institute for Applied Systems Analysis, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-9552-0_8

  • DOI: https://doi.org/10.1007/978-94-015-9552-0_8

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5464-7

  • Online ISBN: 978-94-015-9552-0
