Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping

  • Full Length Paper
  • Series A
  • Published:
Mathematical Programming

Abstract

Existing algorithms for solving the convex minimization problem over the fixed point set of a nonexpansive mapping on a Hilbert space are based on methods, such as the steepest descent method and conjugate gradient methods, for finding a minimizer of the objective function over the whole space, and they emphasize minimizing the objective function as quickly as possible. In practice, however, it is just as important to devise algorithms that converge to the fixed point set quickly, because the fixed point set expresses the constraint conditions that must be satisfied in the problem. This paper proposes an algorithm that not only minimizes the objective function quickly but also converges to the fixed point set much faster than the existing algorithms, and it proves that the algorithm with diminishing step-size sequences strongly converges to the solution of the convex minimization problem. We also analyze the proposed algorithm with each of the Fletcher–Reeves, Polak–Ribière–Polyak, Hestenes–Stiefel, and Dai–Yuan formulas used in conventional conjugate gradient methods, and we show that these variants may fail to converge to the solution of the convex minimization problem. Numerical comparisons with the existing algorithms demonstrate the effectiveness and fast convergence of the proposed algorithm.
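
The precise method is given in Sect. 3 of the paper. As a rough illustration only, and not the proposed Algorithm 3.1, the general shape of fixed-point optimization methods in this line of work (cf. [1, 14, 15]) combines a gradient-based search direction with the nonexpansive mapping N and a diminishing step size; all names in the sketch below are placeholders.

    import numpy as np

    def fixed_point_descent_sketch(grad_f, N, x0, steps=10000):
        """Illustrative pattern (not the paper's Algorithm 3.1):
        x_{n+1} = N(x_n + alpha_n * d_n), with diminishing step sizes
        alpha_n and the steepest descent direction d_n = -grad_f(x_n)."""
        x = np.asarray(x0, dtype=float)
        for n in range(steps):
            alpha = 1.0 / (n + 2)      # diminishing step-size sequence
            d = -grad_f(x)             # search direction
            x = N(x + alpha * d)       # nonexpansive mapping pulls x toward Fix(N)
        return x

    # Toy example: minimize ||x - b||^2 over Fix(N) = the closed unit ball.
    b = np.array([2.0, 0.0])
    grad_f = lambda x: 2.0 * (x - b)
    N = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
    print(fixed_point_descent_sketch(grad_f, N, np.zeros(2)))  # approx. [1., 0.]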

Notes

  1. \((d_n^f)_{n\in \mathbb {N}}\) is referred to as a descent search direction if \(\langle d_n^f, {\nabla }\! f (x_n) \rangle < 0\) for all \(n\in \mathbb {N}\).

  2. These are defined as follows: \(\delta _n^{\mathrm {FR}}:=\Vert {\nabla }\! f (x_{n+1})\Vert ^2 /\Vert {\nabla }\! f (x_n)\Vert ^2, \delta _n^{\mathrm {PRP}}:= v_n/\Vert {\nabla }\! f (x_n)\Vert ^2, \delta _n^{\mathrm {HS}}:= v_n / u_n, \delta _n^{\mathrm {DY}}:=\Vert {\nabla }\! f (x_{n+1})\Vert ^2 /u_n\), where \(u_n := \langle d_n^f, {\nabla }\! f (x_{n+1}) - {\nabla }\! f (x_n) \rangle \) and \(v_n:= \langle {\nabla }\! f (x_{n+1}), {\nabla }\! f (x_{n+1}) - {\nabla }\! f (x_n) \rangle \). (A numerical sketch of these coefficients is given after these notes.)

  3. For example, when there is a bound on \(\mathrm {Fix}(N)\), we can choose \(K\) as a closed ball with a large radius containing \(\mathrm {Fix}(N)\). The metric projection onto such a \(K\) is easily computed (see also Sect. 2.1 and the sketch after these notes). See the final paragraph in Sect. 3.1 for a discussion of Problem 3.1 when a bound on \(\mathrm {Fix}(N)\) either does not exist or is not known.

  4. The conjugate gradient method with the DY formula (i.e., \(\delta _n^{(1)}:= \delta _n^{\mathrm {DY}}\)) generates the descent search direction under the Wolfe conditions [29]. Whether or not the conjugate gradient methods generate descent search directions depends on the choices of \(\delta _n^{(1)}\) and \(\alpha _n\).

  5. Reference [10, Sect. 2.1] showed that \(x_{n+1}:= x_n + \alpha _n d_n^{f}\) and \(d_{n+1}^{f}:= - {\nabla }\! f (x_{n+1}) + \delta _n^{(1)} d_n^{f} - \delta _n^{(2)} z_n\), where \(\alpha _n, \delta _n^{(1)} (>0)\) are arbitrary, \(z_n (\in \mathbb {R}^N)\) is any vector, and \(\delta _n^{(2)}:= \delta _n^{(1)} (\langle {\nabla }\! f(x_{n+1}), d_n^{f} \rangle / \langle {\nabla }\! f(x_{n+1}), z_n \rangle )\), satisfy \(\langle d_n^f, {\nabla }\! f(x_n) \rangle = -\Vert {\nabla }\! f(x_n)\Vert ^2\) \((n\in \mathbb {N})\). (A numerical check of this identity is sketched after these notes.)

  6. We can choose, for example, \(w_n:= N(y_n) - y_n\) and \(z_n:= {\nabla }\! f(x_{n+1})\) \((n\in \mathbb {N})\) by referring to [12] and [15, Sect. 3]. Lemma 3.1 ensures that they are bounded.

  7. Given a halfspace \(S:= \{ x\in H :\langle a,x\rangle \le b \}\), where \(a\,(\ne 0) \in H\) and \(b\in \mathbb {R}\), the mapping \(N(x):= P_{S}(x) = x - [\max \{ 0, \langle a,x \rangle -b \} /\Vert a\Vert ^2]\, a\) \((x\in H)\) is nonexpansive with \(\mathrm {Fix}(N) = \mathrm {Fix}(P_{S}) = S \ne \emptyset \) [18, p. 406], [17, Chap. 28.3] (see also the sketch after these notes). However, we cannot define a bounded \(K\) satisfying \(\mathrm {Fix}(N) = S \subset K\).

  8. Suppose that \((x_n)_{n\in \mathbb {N}} (\subset H)\) weakly converges to \(\hat{x} \in H\) and \(\bar{x} \ne \hat{x}\). Then, the following condition, called Opial’s condition [30], is satisfied: \(\liminf _{n\rightarrow \infty }\Vert x_n - \hat{x}\Vert < \liminf _{n\rightarrow \infty }\Vert x_n - \bar{x}\Vert \). In the above situation, Opial’s condition leads to \(\liminf _{i \rightarrow \infty }\Vert x_{n_i} - x^*\Vert < \liminf _{i \rightarrow \infty }\Vert x_{n_i} - \hat{N} (x^*)\Vert \).

  9. We randomly chose \(\lambda _Q^k \in (1, S)\) \((k=2,3,\ldots , S-1)\) and set \(\hat{Q} \in \mathbb {R}^{S \times S}\) as a diagonal matrix with eigenvalues \(\lambda _Q^1, \lambda _Q^2, \ldots , \lambda _Q^S\). We then constructed a positive definite matrix \(Q \in \mathbb {R}^{S \times S}\) from an orthogonal matrix and \(\hat{Q}\) (a sketch of this construction is given after these notes).

  10. \(x\in \mathbb {R}^S\) satisfies \(\Vert x - N(x)\Vert = 0\) if and only if \(x\in \mathrm {Fix}(N)\).

  11. See Remark 3.2 on the nonmonotonicity of \((\Vert x_n - N(x_n)\Vert )_{n\in \mathbb {N}}\) in Algorithm 3.1.
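
The following sketch reads off the formulas in Note 2: it computes the four coefficients from two consecutive gradients and the previous search direction. It is a plain translation of the definitions with placeholder variable names, not code from the paper.

    import numpy as np

    def cg_coefficients(g_old, g_new, d_old):
        """FR, PRP, HS, and DY coefficients (Note 2), where g_old = grad f(x_n),
        g_new = grad f(x_{n+1}), and d_old = d_n^f."""
        y = g_new - g_old     # grad f(x_{n+1}) - grad f(x_n)
        u = d_old @ y         # u_n
        v = g_new @ y         # v_n
        return {
            "FR": (g_new @ g_new) / (g_old @ g_old),
            "PRP": v / (g_old @ g_old),
            "HS": v / u,
            "DY": (g_new @ g_new) / u,
        }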
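
A minimal sketch of the metric projection mentioned in Note 3, for the closed ball with center c and radius r; the closed form used here is a standard fact and is assumed rather than taken from the paper.

    import numpy as np

    def project_onto_ball(x, c, r):
        """Metric projection onto K = {y : ||y - c|| <= r} (cf. Note 3)."""
        d = np.linalg.norm(x - c)
        return x.copy() if d <= r else c + (r / d) * (x - c)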
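
The choice of \(\delta _n^{(2)}\) in Note 5 cancels the \(\delta _n^{(1)} \langle {\nabla }\! f(x_{n+1}), d_n^f \rangle\) term, so the sufficient descent identity holds exactly. The sketch below checks this numerically with random data; all names are placeholders.

    import numpy as np

    def three_term_direction(g_new, d_old, z, delta1):
        """d_{n+1}^f = -g_{n+1} + delta1 * d_n^f - delta2 * z_n, with delta2
        chosen as in Note 5 so that <d_{n+1}^f, g_{n+1}> = -||g_{n+1}||^2."""
        delta2 = delta1 * (g_new @ d_old) / (g_new @ z)
        return -g_new + delta1 * d_old - delta2 * z

    rng = np.random.default_rng(0)
    g, d, z = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
    print(np.isclose(three_term_direction(g, d, z, delta1=0.5) @ g, -(g @ g)))  # True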
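
The halfspace projection of Note 7 also has a one-line implementation; the sketch below follows the formula stated there, and the residual \(\Vert x - P_S(x)\Vert\) vanishes exactly on \(\mathrm{Fix}(P_S) = S\) (cf. Note 10).

    import numpy as np

    def project_onto_halfspace(x, a, b):
        """Metric projection onto S = {x : <a, x> <= b}, a != 0 (cf. Note 7)."""
        return x - (max(0.0, a @ x - b) / (a @ a)) * a

    a, b, x = np.array([1.0, 1.0]), 1.0, np.array([2.0, 2.0])
    y = project_onto_halfspace(x, a, b)                          # y = [0.5, 0.5] lies in S
    print(np.linalg.norm(y - project_onto_halfspace(y, a, b)))   # 0.0: y is a fixed point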
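
One way to realize the construction in Note 9 is sketched below. It assumes that the extreme eigenvalues are \(\lambda _Q^1 = 1\) and \(\lambda _Q^S = S\) (the note does not state this explicitly) and obtains an orthogonal matrix from the QR factorization of a random matrix; these details are assumptions, not the paper's code.

    import numpy as np

    def random_positive_definite(S, seed=0):
        """Q = U diag(eigenvalues) U^T with eigenvalues in [1, S] (cf. Note 9)."""
        rng = np.random.default_rng(seed)
        eigs = np.concatenate(([1.0], rng.uniform(1.0, S, size=S - 2), [float(S)]))
        U, _ = np.linalg.qr(rng.normal(size=(S, S)))   # random orthogonal matrix
        return U @ np.diag(eigs) @ U.T                 # symmetric positive definite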

References

  1. Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms for Feasibility and Optimization and Their Applications, pp. 473–504. Elsevier, Amsterdam (2001)

  2. Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 51, 1771–1782 (2003)

  3. Slavakis, K., Yamada, I.: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. 55, 4511–4522 (2007)

  4. Iiduka, H.: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22, 862–878 (2012)

  5. Iiduka, H., Uchida, M.: Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Commun. Lett. 15, 596–598 (2011)

  6. Combettes, P.L., Bondon, P.: Hard-constrained inconsistent signal feasibility problems. IEEE Trans. Signal Process. 47, 2460–2468 (1999)

  7. Yamada, I., Ogura, N., Shirakawa, N.: A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp. Math. 313, 269–305 (2002)

  8. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (1999)

  9. Cheng, W.: A two-term PRP-based descent method. Numer. Funct. Anal. Optim. 28, 1217–1230 (2007)

  10. Narushima, Y., Yabe, H., Ford, J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 21, 212–230 (2011)

  11. Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26, 629–640 (2006)

  12. Zhang, L., Zhou, W., Li, D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 104, 561–572 (2006)

  13. Zhang, L., Zhou, W., Li, D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22, 697–711 (2007)

  14. Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)

  15. Iiduka, H.: Three-term conjugate gradient method for the convex optimization problem over the fixed point set of a nonexpansive mapping. Appl. Math. Comput. 217, 6315–6327 (2011)

  16. Iiduka, H.: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 148, 580–592 (2011)

  17. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)

  18. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)

  19. Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)

  20. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

  21. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Japan (2000)

  22. Stark, H., Yang, Y.: Vector Space Projections: A Numerical Approach to Signal and Image Processing. Wiley, London (1998)

  23. Wolfe, P.: Finding the nearest point in a polytope. Math. Program. 11, 128–149 (1976)

  24. Aoyama, K., Kimura, Y., Takahashi, W., Toyoda, M.: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 8, 471–489 (2007)

  25. Ekeland, I., Témam, R.: Convex Analysis and Variational Problems. Classics Appl. Math., vol. 28. SIAM, Philadelphia (1999)

  26. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics Appl. Math., vol. 31. SIAM, Philadelphia (2000)

  27. Borwein, J.M., Lewis, A.S.: Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, Berlin (2000)

  28. Zeidler, E.: Nonlinear Functional Analysis and Its Applications III: Variational Methods and Optimization. Springer, Berlin (1985)

  29. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)

  30. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)

  31. Bakushinsky, A., Goncharsky, A.: Ill-Posed Problems: Theory and Applications. Kluwer, Dordrecht (1994)

  32. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  33. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^2)\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)

  34. Iiduka, H.: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1–26 (2013)

  35. Iiduka, H.: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227–242 (2012)

  36. Iiduka, H., Yamada, I.: Computational method for solving a stochastic linear-quadratic control problem given an unsolvable stochastic algebraic Riccati equation. SIAM J. Control Optim. 50, 2173–2192 (2012)

Acknowledgments

I wrote Sect. 3.2 by referring to the referee’s report on the original manuscript of [14]. I am sincerely grateful to the anonymous referee who reviewed the original manuscript of [14] for helping me compile the paper. I would also like to thank the Co-Editor, Michael C. Ferris, and the two anonymous reviewers for helping me improve the original manuscript.

Author information

Corresponding author

Correspondence to Hideaki Iiduka.

Additional information

This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for Young Scientists (B) (23760077), and in part by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (C) (22540175).


About this article

Cite this article

Iiduka, H. Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. 149, 131–165 (2015). https://doi.org/10.1007/s10107-013-0741-1

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10107-013-0741-1

Keywords

Mathematics Subject Classification (2010)
