Abstract
Federated learning is a machine learning paradigm for distributed data: it enables multiple parties to cooperatively train a global model without sharing their private data. In the classic federated learning protocol, model parameters are the information exchanged between the clients and the server: each client updates its local model according to the global model parameters, and the server aggregates the clients' updated parameters to obtain a new global model. However, in practical federated learning scenarios, collaborative learning that uses model parameters as the exchanged information still suffers from privacy problems such as model stealing attacks. We therefore use knowledge distillation to avoid model stealing attacks in federated learning, and on this basis propose a novel aggregation scheme in which each client's output distribution refines the aggregation result through model training. Experiments show that the scheme converges normally while ensuring privacy, and reduces the number of interactions between client and server, thereby reducing the resources each client consumes by participating in federated learning.
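The core idea above is that clients exchange output distributions (soft labels) rather than model parameters, and the server aggregates those distributions into a distillation target. A minimal sketch of that flow, assuming a FedMD-style setup where each client evaluates its model on a shared reference set (the function names and weighting scheme here are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_soft_labels(client_logits, weights=None):
    """Server step: average the clients' output distributions on a shared
    reference set. Only soft labels cross the network, never parameters.

    client_logits: list of (n_samples, n_classes) arrays, one per client.
    Returns the aggregated (n_samples, n_classes) distillation target.
    """
    probs = np.stack([softmax(l) for l in client_logits])  # (k, n, c)
    if weights is None:
        weights = np.full(len(client_logits), 1.0 / len(client_logits))
    return np.tensordot(np.asarray(weights), probs, axes=1)  # (n, c)

def kl_distillation_loss(student_logits, teacher_probs):
    """Client step: KL(teacher || student), the local distillation
    objective each client minimizes toward the aggregated target."""
    q = softmax(student_logits)
    p = teacher_probs
    per_sample = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(per_sample.mean())
```

In this sketch a round consists of each client uploading its logits on the reference set, the server calling `aggregate_soft_labels`, and each client then training against `kl_distillation_loss`; the loss is zero when a client's outputs already match the aggregated target.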
Acknowledgement
The research leading to these results has received funding from the China Postdoctoral Science Foundation (2020M682658).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Hu, L., Yan, H., Zhang, Z. (2021). Training Aggregation in Federated Learning. In: Cheng, J., Tang, X., Liu, X. (eds) Cyberspace Safety and Security. CSS 2020. Lecture Notes in Computer Science(), vol 12653. Springer, Cham. https://doi.org/10.1007/978-3-030-73671-2_9
DOI: https://doi.org/10.1007/978-3-030-73671-2_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-73670-5
Online ISBN: 978-3-030-73671-2
eBook Packages: Computer Science (R0)