
Fast Communication Structure for Asynchronous Distributed ADMM Under Unbalance Process Arrival Pattern

  • Conference paper
  • Published in: Artificial Neural Networks and Machine Learning – ICANN 2018 (ICANN 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11139)

Abstract

The alternating direction method of multipliers (ADMM) is an algorithm for solving large-scale optimization problems in machine learning. To reduce communication delay in distributed environments, asynchronous distributed ADMM (AD-ADMM) was proposed. However, because of the unbalanced process arrival pattern found in multiprocessor clusters, communication over the star structure used by AD-ADMM is inefficient. Moreover, the load across the cluster is unbalanced, which reduces data-processing capacity. This paper proposes a hierarchical parameter server (HPS) communication structure and a corresponding asynchronous distributed ADMM algorithm (HAD-ADMM). The algorithm mitigates the unbalanced-arrival problem through process grouping and scattered updates of the global variable, essentially achieving load balancing. Experiments show that HAD-ADMM is highly efficient in large-scale distributed environments and has no significant impact on convergence.
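To make the setting concrete, the following is a minimal, synchronous, single-process sketch of consensus ADMM for a distributed least-squares problem. It is not the paper's HPS/HAD-ADMM scheme: in the paper, each worker would run its local x-update in parallel and a (hierarchical) parameter server would perform the global z-update asynchronously. The problem data, the penalty `rho`, and the iteration count are illustrative assumptions.

```python
# Consensus ADMM sketch for  minimize  sum_i 0.5 * ||A_i x - b_i||^2,
# written in consensus form x_i = z (Boyd et al., 2010).
import numpy as np

def consensus_admm(A_blocks, b_blocks, rho=1.0, iters=300):
    n = A_blocks[0].shape[1]
    N = len(A_blocks)
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # global consensus variable
    # Pre-factor each worker's local system (A_i^T A_i + rho I).
    facts = [np.linalg.inv(A.T @ A + rho * np.eye(n)) for A in A_blocks]
    for _ in range(iters):
        # Local x-updates: in AD-ADMM each worker does this in parallel.
        for i in range(N):
            x[i] = facts[i] @ (A_blocks[i].T @ b_blocks[i] + rho * (z - u[i]))
        # Global z-update: the parameter server's averaging step.
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # Dual updates, one per worker.
        for i in range(N):
            u[i] = u[i] + x[i] - z
    return z

# Noiseless synthetic data: z should recover x_true.
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((20, 5)) for _ in range(4)]
x_true = rng.standard_normal(5)
b_blocks = [A @ x_true for A in A_blocks]
z = consensus_admm(A_blocks, b_blocks)
```

The star structure criticized in the paper corresponds to the single averaging step above: every worker reports to one server, so a late-arriving process stalls the global update. HPS instead groups processes hierarchically so that averaging happens in stages.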



Acknowledgements

This research was supported in part by Innovation Research program of Shanghai Municipal Education Commission under Grant 12ZZ094, and High-tech R&D Program of China under Grant 2009AA012201, and Shanghai Academic Leading Discipline Project J50103, and ZiQiang 4000 experimental environment of Shanghai University.

Author information


Corresponding author

Correspondence to Yongmei Lei.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, S., Lei, Y. (2018). Fast Communication Structure for Asynchronous Distributed ADMM Under Unbalance Process Arrival Pattern. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial Neural Networks and Machine Learning – ICANN 2018. ICANN 2018. Lecture Notes in Computer Science, vol 11139. Springer, Cham. https://doi.org/10.1007/978-3-030-01418-6_36

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-01418-6_36


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01417-9

  • Online ISBN: 978-3-030-01418-6

  • eBook Packages: Computer Science (R0)
