Conditional gradient method for multiobjective optimization

Abstract

We analyze the conditional gradient method, also known as the Frank–Wolfe method, for constrained multiobjective optimization. The constraint set is assumed to be convex and compact, and the objective functions are assumed to be continuously differentiable. The method is considered with different strategies for obtaining the step sizes. Asymptotic convergence properties and iteration-complexity bounds, with and without convexity assumptions on the objective functions, are established. Numerical experiments are provided to illustrate the effectiveness of the method and to certify the obtained theoretical results.
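The abstract does not spell out the iteration, but a minimal sketch of a Frank–Wolfe-type step in this setting may help fix ideas. The sketch below assumes a box constraint set (so the linear subproblem is an LP), uses the classical diminishing step size 2/(k+2) as just one of the step-size strategies the abstract alludes to, and all names (`cond_grad_multiobj`, `grads`) are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def cond_grad_multiobj(x0, grads, lo, hi, max_iter=200, tol=1e-8):
    """Frank-Wolfe-type iteration for minimizing (f_1, ..., f_m) over a box.

    At the iterate x, the linear subproblem
        min_{s in C} max_i <grad f_i(x), s - x>
    is solved as an LP in the variables (s, t):
        min t  subject to  <g_i, s> - t <= <g_i, x>,  lo <= s <= hi.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for k in range(max_iter):
        G = np.array([g(x) for g in grads])        # m x n matrix of gradients
        c = np.zeros(n + 1)
        c[-1] = 1.0                                # LP objective: minimize t
        A_ub = np.hstack([G, -np.ones((G.shape[0], 1))])
        b_ub = G @ x
        bounds = [(lo[j], hi[j]) for j in range(n)] + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            break
        s, theta = res.x[:n], res.x[-1]
        if theta > -tol:                           # theta(x) ~ 0 certifies Pareto criticality
            break
        lam = 2.0 / (k + 2.0)                      # classical diminishing step size
        x = x + lam * (s - x)
    return x

# Hypothetical usage: two convex quadratics f_i(x) = ||x - a_i||^2 on [-1, 1]^2
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grads = [lambda x: 2.0 * (x - a1), lambda x: 2.0 * (x - a2)]
x_star = cond_grad_multiobj(np.zeros(2), grads, lo=-np.ones(2), hi=np.ones(2))
```

Under the stated assumptions, the optimal value of the subproblem is nonpositive (taking s = x gives t = 0), so a value near zero serves as a stopping certificate; an Armijo-type line search could replace the diminishing rule as another of the step-size strategies the paper considers.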



Acknowledgements

This work was funded by FAPEG (Grants PRONEM-201710267000532, PPP03/15-201810267001725), CNPq (Grants 305158/2014-7, 08151/2016-1, 302473/2017-3, 424860/2018-0), and CAPES.

Author information

Correspondence to L. F. Prudente.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Assunção, P.B., Ferreira, O.P. & Prudente, L.F. Conditional gradient method for multiobjective optimization. Comput Optim Appl 78, 741–768 (2021). https://doi.org/10.1007/s10589-020-00260-5

