Abstract
As datasets and neural networks grow in scale, training times are increasing rapidly. Distributed parallel training has been proposed to accelerate deep neural network training, and most existing efforts target GPU clusters. This paper focuses on the performance of distributed parallel training on the CPU clusters of supercomputer systems. Using resources of the Tianhe-2 supercomputer, we conduct an extensive evaluation of popular deep learning tools, including Caffe, TensorFlow, and BigDL, and test several deep neural network models: AutoEncoder, LeNet, AlexNet, and ResNet. The experimental results show that Caffe performs best in communication efficiency and scalability. BigDL is the fastest in computation, benefiting from its CPU optimizations, but it suffers from long communication delays due to its dependence on the MapReduce framework. The insights and conclusions from our evaluation provide a useful reference for improving the utilization of supercomputer resources in distributed deep learning.
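To make the communication cost discussed in the abstract concrete, the following is a minimal, hypothetical micro-benchmark of the gradient all-reduce step in synchronous data-parallel training, written with mpi4py. It is a sketch only, not the authors' evaluation harness; the parameter count, iteration count, and all names are illustrative assumptions.

```python
# Hypothetical micro-benchmark of the gradient-synchronization step in
# synchronous data-parallel training. NOT the authors' code; the model
# size and iteration count below are illustrative assumptions.
import time

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

NUM_PARAMS = 25_000_000  # assumption: a roughly AlexNet-scale parameter count
NUM_ITERS = 20

# Mock local gradients; a real run would produce these via backpropagation.
local_grads = np.random.rand(NUM_PARAMS).astype(np.float32)
summed_grads = np.empty_like(local_grads)

comm.Barrier()
start = time.perf_counter()
for _ in range(NUM_ITERS):
    # The all-reduce sums gradients across all workers; this is the
    # dominant communication cost in synchronous data parallelism.
    comm.Allreduce(local_grads, summed_grads, op=MPI.SUM)
comm.Barrier()
elapsed = time.perf_counter() - start

if rank == 0:
    per_iter_ms = 1e3 * elapsed / NUM_ITERS
    gbytes = local_grads.nbytes / 1e9
    print(f"workers={comm.Get_size()} allreduce={per_iter_ms:.1f} ms/iter "
          f"({gbytes:.2f} GB of float32 gradients per worker)")
```

Launched with, e.g., `mpirun -np 8 python allreduce_bench.py`, the sketch reports per-iteration synchronization time as the worker count scales, the kind of quantity in which, according to the abstract, MPI-based tools such as Caffe would be expected to outperform BigDL's MapReduce-based synchronization.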
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Du, X., et al. (2018). Comparative Study of Distributed Deep Learning Tools on Supercomputers. In: Vaidya, J., Li, J. (eds.) Algorithms and Architectures for Parallel Processing. ICA3PP 2018. Lecture Notes in Computer Science, vol. 11334. Springer, Cham. https://doi.org/10.1007/978-3-030-05051-1_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-05050-4
Online ISBN: 978-3-030-05051-1
eBook Packages: Computer Science, Computer Science (R0)