
An implicit filtering algorithm for derivative-free multiobjective optimization with box constraints

Computational Optimization and Applications

Abstract

This paper is concerned with the definition of new derivative-free methods for box-constrained multiobjective optimization. The method that we propose is a non-trivial extension of the well-known implicit filtering algorithm to the multiobjective case. Global convergence results are stated under smoothness assumptions on the objective functions. We also show how the proposed method can be used as a tool to enhance the performance of the Direct MultiSearch (DMS) algorithm. Numerical results on a set of test problems show the efficiency of the implicit filtering algorithm when used to find a single Pareto solution of the problem. Furthermore, numerical experiments show that the proposed algorithm improves the performance of DMS alone when used to reconstruct the entire Pareto front.





Acknowledgements

We are grateful to the three anonymous reviewers whose stimulating comments and suggestions greatly helped us improve the paper. We would also like to thank Prof. Ana Luísa Custódio, José F. Aguilar Madeira, A. Ismael F. Vaz, and Luís Nunes Vicente for providing us with the MATLAB code of their Direct MultiSearch (DMS) algorithm. This work was partially supported by INdAM-GNCS.

Author information

Corresponding author

Correspondence to G. Liuzzi.

Appendix: Technical results

In this appendix we prove two technical results that are used in the convergence analysis.

Proposition 2

Let \(f:\mathbb {R}^n \rightarrow \mathbb {R}\) be continuously differentiable and let \(x\in \mathcal {F}\). Let \(\{z_k\}\subset \mathcal {F}\) and \(\{h_k\}\subset \mathbb {R}^+\) be sequences such that

$$\begin{aligned} \lim _{k\rightarrow \infty }z_k=x\quad \quad \lim _{k\rightarrow \infty }h_k=0. \end{aligned}$$
(28)

Assume that, for \(i=1,\ldots ,n\), at least one of the following conditions holds:

$$\begin{aligned}&z_k+h_ke_i\in \mathcal {F},\\&z_k-h_ke_i\in \mathcal {F}. \end{aligned}$$

Then we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\nabla _{h_k}f(z_k)=\nabla f(x). \end{aligned}$$

Proof

Let \(i\in \{1,\ldots ,n\}\) and define the following index sets

$$\begin{aligned} K_1&=\left\{ k:z_k+h_ke_i\in \mathcal {F},\ z_k-h_ke_i\notin \mathcal {F}\right\} ,\\ K_2&=\left\{ k: z_k\pm h_ke_i\in \mathcal {F}\right\} ,\\ K_3&=\left\{ k:z_k-h_ke_i\in \mathcal {F},\ z_k+h_ke_i\notin \mathcal {F}\right\} . \end{aligned}$$

By the definition of the approximated gradient, we have

$$\begin{aligned} \frac{\partial _h f(z_k)}{\partial x_i}= \left\{ \begin{array}{ll} \frac{f(z_k+h_ke_i)-f(z_k)}{h_k}&{} k\in K_1\\ \frac{f(z_k+h_ke_i)-f(z_k-h_ke_i)}{2h_k}&{}k\in K_2\\ \frac{f(z_k)-f(z_k-h_ke_i)}{h_k}&{}k\in K_3 \end{array} \right. \end{aligned}$$

Suppose that \(K_1\) is an infinite subset. For all \(k\in K_1\), by the Mean Value Theorem, we can write

$$\begin{aligned} \frac{\partial _h f(z_k)}{\partial x_i}=\frac{\partial f(\xi _k)}{\partial x_i}, \end{aligned}$$

where \(\xi _k=z_k+\theta _k h_ke_i\), with \(\theta _k\in (0,1)\). Taking the limit as \(k\rightarrow \infty \), \(k\in K_1\), and recalling (28) and the continuity of the gradient, we obtain

$$\begin{aligned} \lim _{k\in K_1,k\rightarrow \infty } \frac{\partial _h f(z_k)}{\partial x_i}=\frac{\partial f(x)}{\partial x_i}. \end{aligned}$$

By repeating the same reasoning with the sets \(K_2\) and \(K_3\), we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{\partial _h f(z_k)}{\partial x_i}=\frac{\partial f (x)}{\partial x_i}, \end{aligned}$$

and the proof is complete. \(\square \)
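
The case distinction on \(K_1\), \(K_2\), \(K_3\) above reflects how the approximated gradient is built on a box: a central difference is used whenever both \(z\pm he_i\) are feasible, and a one-sided difference otherwise. The following minimal Python sketch illustrates this feasibility-aware scheme; the function name approx_gradient, the quadratic test function, and the bounds are illustrative choices, not the authors' implementation.

```python
import numpy as np

def approx_gradient(f, z, h, lower, upper):
    """Feasibility-aware finite-difference gradient on the box lower <= x <= upper.

    Central difference when both z + h*e_i and z - h*e_i are feasible (set K_2),
    forward difference when only z + h*e_i is feasible (K_1),
    backward difference when only z - h*e_i is feasible (K_3).
    Illustrative sketch only.
    """
    n = z.size
    grad = np.empty(n)
    fz = f(z)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        plus_ok = z[i] + h <= upper[i]
        minus_ok = z[i] - h >= lower[i]
        if plus_ok and minus_ok:        # K_2: central difference
            grad[i] = (f(z + e) - f(z - e)) / (2.0 * h)
        elif plus_ok:                   # K_1: forward difference
            grad[i] = (f(z + e) - fz) / h
        elif minus_ok:                  # K_3: backward difference
            grad[i] = (fz - f(z - e)) / h
        else:
            raise ValueError("step h is too large for the box in coordinate %d" % i)
    return grad

if __name__ == "__main__":
    # As h -> 0 the approximation tends to the true gradient 2*z (Proposition 2).
    f = lambda x: float(np.sum(x ** 2))
    lower, upper = np.zeros(3), np.ones(3)
    z = np.array([0.0, 0.5, 1.0])       # points on and inside the box boundary
    for h in (1e-1, 1e-3, 1e-5):
        print(h, approx_gradient(f, z, h, lower, upper), 2 * z)
```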

Proposition 3

Consider Problem (1). Let \(F:\mathbb {R}^n \rightarrow \mathbb {R}^m\) be continuously differentiable, let \(x\in \mathcal {F}\), and let \(\theta :\mathcal {F}\times \mathbb {R}^+ \rightarrow \mathbb {R}\) be defined as in (9). Then:

(i) \(\theta (x,h)\le 0\) for all \(x \in \mathcal {F}\) and \(h>0\);

(ii) let \(\{z_k\}\subset \mathcal {F}\) and \(\{h_k\}\subset \mathbb {R}^+\) be sequences satisfying the assumptions of Proposition 2; we have

    $$\begin{aligned} \lim _{k\rightarrow \infty }\theta (z_k,h_k)=\theta (x). \end{aligned}$$

Proof

(i) Given \(x,y\in \mathcal {F}\) and \(h>0\), we consider the function \(g\) defined as follows:

$$\begin{aligned} g(y,h,x) = \max _{i=1,\ldots ,m}\nabla _h f_{i}(x)^\top (y-x), \end{aligned}$$

and note that

$$\begin{aligned} \theta (x,h) = \min _{y \in \mathcal {F}} g(y,h,x). \end{aligned}$$

Then \(\theta (x,h)\le 0\) follows immediately: since \(x\in \mathcal {F}\), the point \(y=x\) is feasible for the minimization defining \(\theta (x,h)\) and \(g(x,h,x)=0\), so the minimum cannot exceed zero.

(ii) We preliminarily observe that

$$\begin{aligned} | \max _{i} a_i - \max _{ i} b_i |\le \Vert a-b\Vert , \quad \mathrm{for ~any~} \quad a,b\in \mathbb {R}^m. \end{aligned}$$
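
This bound is a consequence of the componentwise estimate \(|a_i-b_i|\le \Vert a-b\Vert \): if \(j\) is an index attaining \(\max _i a_i\), then

$$\begin{aligned} \max _{i} a_i - \max _{i} b_i \le a_j - b_j \le \Vert a-b\Vert , \end{aligned}$$

and exchanging the roles of \(a\) and \(b\) yields the bound on the absolute value.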

Let us define

$$\begin{aligned}&y(x)\in \arg \min _{y\in \mathcal {F}} ~ \max _{i=1,\ldots ,m} \nabla f_i(x)^\top (y-x),\\&y_k\in \arg \min _{y\in \mathcal {F}} ~ \max _{i=1,\ldots ,m} \nabla _{h_k} f_i(z_k)^\top (y-z_k), \end{aligned}$$

so that

$$\begin{aligned}&\max _{i=1,\ldots ,m} \nabla f_i(x)^\top (y(x)-x)\le \max _{i=1,\ldots ,m} \nabla f_i(x)^\top (y_k-x)\\&\max _{i=1,\ldots ,m} \nabla _{h_k} f_i(z_k)^\top (y_k-z_k)\le \max _{i=1,\ldots ,m} \nabla _{h_k} f_i(z_k)^\top (y(x)-z_k). \end{aligned}$$

Denote by \(J_{h_k}(z_k)\) the approximated Jacobian \(J_{h_k}(z_k)=[ \nabla _{h_k}f_1(z_k),\ldots , \nabla _{h_k} f_m(z_k) ]^\top \). We can write

$$\begin{aligned} \theta (z_k,h_k)-\theta (x)&= \max _{i} \nabla _{h_k} f_i(z_k)^\top (y_k-z_k)- \max _{i} \nabla f_i(x)^\top (y(x)-x)\\ &\le \max _{i} \nabla _{h_k} f_i(z_k)^\top (y(x)-z_k)- \max _{i} \nabla f_i(x)^\top (y(x)-x)\\ &\le \Vert J_{h_k}(z_k)^\top (y(x)-z_k) - J(x)^\top (y(x)-x)\Vert \\ &\le \Vert (J_{h_k}(z_k) - J(x))^\top y(x)\Vert + \Vert J(x)^\top x - J_{h_k}(z_k)^\top z_k + J(x)^\top z_k - J(x)^\top z_k\Vert \\ &\le \Vert (J_{h_k}(z_k) - J(x))^\top y(x)\Vert + \Vert J(x)^\top (z_k-x)\Vert + \Vert (J_{h_k}(z_k)-J(x))^\top z_k \Vert . \end{aligned}$$

A similar bound, with \(y_k\) in place of \(y(x)\), can be obtained for \(\theta (x)-\theta (z_k,h_k)\). Then, since \(z_k\) and \(y_k\) belong to the compact set \(\mathcal {F}\), Proposition 2 implies that \(|\theta (z_k,h_k)-\theta (x)|\rightarrow 0\) as \(k\rightarrow \infty \). \(\square \)
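
For completeness, \(\theta (x,h)\) defined through the min-max problem above can be evaluated by solving a linear program over the box: minimize \(t\) subject to \(\nabla _h f_i(x)^\top (y-x)\le t\) for \(i=1,\ldots ,m\) and \(l\le y\le u\). The sketch below, which uses scipy.optimize.linprog, is one possible way to do this and is not taken from the paper; the function name theta and the two-objective example are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def theta(J_h, x, lower, upper):
    """Stationarity measure theta = min_{l <= y <= u} max_i g_i^T (y - x),
    where g_i is the i-th row of the (approximate) Jacobian J_h (shape m x n).

    LP reformulation: minimize t subject to g_i^T y - t <= g_i^T x, l <= y <= u.
    Illustrative sketch; by Proposition 3(i) the optimal value is <= 0.
    """
    m, n = J_h.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # objective: minimize t
    A_ub = np.hstack([J_h, -np.ones((m, 1))])     # g_i^T y - t <= g_i^T x
    b_ub = J_h @ x
    bounds = [(lower[j], upper[j]) for j in range(n)] + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

if __name__ == "__main__":
    # Two objectives on the box [0, 1]^2 with gradients e_1 and e_2 at x = (0.5, 0.5):
    x = np.array([0.5, 0.5])
    J_h = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
    print(theta(J_h, x, np.zeros(2), np.ones(2)))  # -0.5: x is not Pareto stationary
```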


Cite this article

Cocchi, G., Liuzzi, G., Papini, A. et al. An implicit filtering algorithm for derivative-free multiobjective optimization with box constraints. Comput Optim Appl 69, 267–296 (2018). https://doi.org/10.1007/s10589-017-9953-2

