Abstract
We consider a primal-dual algorithm for minimizing \(f({\mathbf {x}})+h\square l({\mathbf {A}}{\mathbf {x}})\) with Fréchet differentiable \(f\) and \(l^*\). This primal-dual algorithm has two names in the literature: the Primal-Dual Fixed-Point algorithm based on the Proximity Operator (PDFP2O) and the Proximal Alternating Predictor-Corrector (PAPC). In this paper, we prove its convergence under a weaker condition on the stepsizes than the existing ones. With additional assumptions, we show its linear convergence. In addition, we show that this condition (the upper bound on the stepsize) is tight and cannot be weakened. This result also recovers a recently proposed positive-indefinite linearized augmented Lagrangian method. Finally, we apply this result to the decentralized consensus algorithm PG-EXTRA and derive the weakest convergence condition.
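To make the algorithm under discussion concrete, the following is a minimal numerical sketch of the PAPC/PDFP2O iteration in the special case where \(l\) is the indicator of \(\{0\}\), so that \(h\square l = h\) and the problem reduces to minimizing \(f({\mathbf {x}})+h({\mathbf {A}}{\mathbf {x}})\). The function and parameter names (`papc`, `grad_f`, `prox_h_conj`, `tau`, `sigma`) are illustrative, not from the paper, and the stepsizes in the toy instance are chosen to satisfy the classical bounds \(\tau < 2/L\) and \(\tau\sigma\|{\mathbf {A}}{\mathbf {A}}^\top\|\le 1\), which lie within any weaker condition.

```python
import numpy as np

def papc(grad_f, prox_h_conj, A, x0, tau, sigma, iters=100):
    """Sketch of the PAPC / PDFP2O iteration for min_x f(x) + h(Ax)
    (the special case l = indicator of {0}, so h-inf-conv-l = h)."""
    x = x0.copy()
    y = np.zeros(A.shape[0])  # dual variable
    for _ in range(iters):
        # predictor: gradient step on f plus a dual correction
        p = x - tau * grad_f(x) - tau * (A.T @ y)
        # dual update via the proximity operator of the conjugate h*
        y = prox_h_conj(y + sigma * (A @ p), sigma)
        # corrector: repeat the primal step with the updated dual variable
        x = x - tau * grad_f(x) - tau * (A.T @ y)
    return x

# Toy instance: f(x) = 0.5*||x - b||^2 (so L = 1), h = l1 norm, A = I.
# The prox of h* is the projection onto the unit box [-1, 1]^m.
b = np.array([3.0, 0.5])
A = np.eye(2)
x = papc(grad_f=lambda x: x - b,
         prox_h_conj=lambda v, s: np.clip(v, -1.0, 1.0),
         A=A, x0=np.zeros(2), tau=1.0, sigma=1.0)
print(x)  # ≈ [2. 0.], i.e., soft-thresholding of b at level 1
```

On this instance the iterate matches the closed-form minimizer of \(\tfrac12\|x-b\|^2+\|x\|_1\), namely the soft-thresholding of \(b\); the predictor-corrector structure (two primal steps around one dual step) is what distinguishes PAPC/PDFP2O from the Chambolle-Pock scheme.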
References
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Boţ, R.I., Csetnek, E.R., Heinrich, A., Hendrich, C.: On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems. Math. Program. 150(2), 251–279 (2015)
Boţ, R.I., Hendrich, C.: A Douglas–Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM J. Optim. 23(4), 2541–2565 (2013)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016)
Chambolle, A., Pock, T.: On the ergodic convergence rates of a first-order primal–dual algorithm. Math. Program. 159(1-2), 253–287 (2016)
Chen, P., Huang, J., Zhang, X.: A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29(2), 025011 (2013)
Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for minimization of the sum of three convex separable functions. Fixed Point Theory Appl. 2016(1), 54 (2016)
Combettes, P.L., Pesquet, J.C.: Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Var. Anal. 20(2), 307–330 (2012)
Condat, L.: A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158(2), 460–479 (2013)
Davis, D., Yin, W.: A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 25(4), 829–858 (2017)
Drori, Y., Sabach, S., Teboulle, M.: A simple algorithm for a class of nonsmooth convex–concave saddle-point problems. Oper. Res. Lett. 43(2), 209–214 (2015)
Hamedani, E.Y., Aybat, N.S.: A primal-dual algorithm for general convex-concave saddle point problems. (2018)
Hamedani, E.Y., Jalilzadeh, A., Aybat, N., Shanbhag, U.: Iteration complexity of randomized primal-dual methods for convex-concave saddle point problems. arXiv:1806.04118 (2018)
He, B., Ma, F., Yuan, X.: Optimal proximal augmented Lagrangian method and its application to full Jacobian splitting for multi-block separable convex minimization problems. IMA J. Numer. Anal. 40(2), 1188–1216 (2020)
Hien, L.T.K., Zhao, R., Haskell, W.B.: An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems. arXiv:1711.03669 (2017)
Jaggi, M., Smith, V., Takáč, M., Terhorst, J., Krishnan, S., Hofmann, T., Jordan, M.I.: Communication-efficient distributed dual coordinate ascent. In: Advances in Neural Information Processing Systems, pp. 3068–3076 (2014)
Ko, S., Yu, D., Won, J.H.: Easily parallelizable and distributable class of algorithms for structured sparsity, with optimal acceleration. J. Comput. Graph. Stat. 28(4), 821–833 (2019)
Komodakis, N., Pesquet, J.C.: Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag. 32(6), 31–54 (2015)
Latafat, P., Patrinos, P.: Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Comput. Optim. Appl. 68(1), 57–93 (2017)
Li, Y., Yan, M.: On linear convergence of two decentralized algorithms. J. Optim. Theory Appl. (2019)
Li, Z., Shi, W., Yan, M.: A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates. IEEE Trans. Signal Process. 67(17), 4494–4506 (2019)
Loris, I., Verhoeven, C.: On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty. Inverse Probl. 27(12), 125007 (2011)
Petryshyn, W.: Construction of fixed points of demicompact mappings in Hilbert space. J. Math. Anal. Appl. 14(2), 276–284 (1966)
Ryu, E.K., Boyd, S.: Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
Shi, W., Ling, Q., Wu, G., Yin, W.: A proximal gradient algorithm for decentralized composite optimization. IEEE Trans. Signal Process. 63(22), 6013–6023 (2015)
Vũ, B. C.: A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 38(3), 667–681 (2013)
Wu, T., Yuan, K., Ling, Q., Yin, W., Sayed, A.H.: Decentralized consensus optimization with asynchrony and delays. IEEE Trans. Signal Inf. Process. Netw. 4(2), 293–307 (2018)
Xu, Y.: First-order methods for constrained convex programming based on linearized augmented Lagrangian function. INFORMS J. Optim., to appear (2020)
Xu, Y.: Primal-dual stochastic gradient method for convex programs with many functional constraints. SIAM J. Optim. 30(2), 1664–1692 (2020)
Yan, M.: A new primal-dual algorithm for minimizing the sum of three functions with a linear operator. J. Sci. Comput. 76(3), 1698–1717 (2018)
Yang, J., Yuan, X.: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 82(281), 301–329 (2013)
Acknowledgments
The authors would like to thank the anonymous reviewers for the helpful comments and suggestions that improved this paper.
Funding
This work was supported in part by the National Science Foundation (NSF) grants DMS-1621798 and DMS-2012439, and by the Natural Science Foundation of China grant 62001167.
Additional information
Communicated by: Russell Luke
About this article
Cite this article
Li, Z., Yan, M. New convergence analysis of a primal-dual algorithm with large stepsizes. Adv Comput Math 47, 9 (2021). https://doi.org/10.1007/s10444-020-09840-9