{"title":"Efficient and Privacy-preserving Distributed Learning in Cloud-Edge Computing Systems","authors":"Yili Jiang, Kuan Zhang, Y. Qian, R. Hu","doi":"10.1145/3468218.3469044","DOIUrl":null,"url":null,"abstract":"Machine learning and cloud computing have been integrated in diverse applications to provide intelligent services. With powerful computational ability, the cloud server can execute machine learning algorithm efficiently. However, since accurate machine learning highly depends on training the model with sufficient data. Transmitting massive raw data from distributed devices to the cloud leads to heavy communication overhead and privacy leakage. Distributed learning is a promising technique to reduce data transmission by allowing the distributed devices to participant in model training locally. Thus a global learning task can be performed in a distributed way. Although it avoids to disclose the participants' raw data to the cloud directly, the cloud can infer partial private information by analyzing their local models. To tackle this challenge, the state-of-the-art solutions mainly rely on encryption and differential privacy. In this paper, we propose to implement the distributed learning in a three-layer cloud-edge computing system. By applying the mini-batch gradient decent, we can decompose a learning task to distributed edge nodes and participants hierarchically. To improve the communication efficiency while preserving privacy, we employ secure aggregation protocol in small groups by utilizing the social network of participants. Simulation results are presented to show the effectiveness of our proposed scheme in terms of learning accuracy and efficiency.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3468218.3469044","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Machine learning and cloud computing have been integrated in diverse applications to provide intelligent services. With its powerful computational ability, a cloud server can execute machine learning algorithms efficiently. However, accurate machine learning depends heavily on training the model with sufficient data, and transmitting massive amounts of raw data from distributed devices to the cloud leads to heavy communication overhead and privacy leakage. Distributed learning is a promising technique for reducing data transmission by allowing distributed devices to participate in model training locally, so that a global learning task can be performed in a distributed way. Although this avoids disclosing the participants' raw data to the cloud directly, the cloud can still infer partial private information by analyzing their local models. To tackle this challenge, state-of-the-art solutions mainly rely on encryption and differential privacy. In this paper, we propose to implement distributed learning in a three-layer cloud-edge computing system. By applying mini-batch gradient descent, we decompose a learning task hierarchically across distributed edge nodes and participants. To improve communication efficiency while preserving privacy, we employ a secure aggregation protocol within small groups formed by utilizing the social network of participants. Simulation results are presented to show the effectiveness of the proposed scheme in terms of learning accuracy and efficiency.
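
The hierarchical decomposition described above can be illustrated with a minimal sketch, not the authors' implementation: participants compute mini-batch gradients on local data, each edge node averages the gradients of its group, and the cloud averages the edge-level results to update the global model. The linear-regression task, group sizes, learning rate, and function names below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y, batch_size=32):
    """One participant: mini-batch gradient of the average squared-error loss."""
    idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(Xb)

def edge_aggregate(grads):
    """One edge node: average the gradients reported by its participant group."""
    return np.mean(grads, axis=0)

def cloud_update(w, edge_grads, lr=0.1):
    """Cloud: average the edge-level gradients and take one descent step."""
    return w - lr * np.mean(edge_grads, axis=0)

def make_participant(w_true, n=200, d=5):
    """Synthetic local dataset for one participant (illustrative only)."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

# Toy topology: 2 edge nodes, each serving a group of 3 participants.
d = 5
w_true = rng.normal(size=d)
edges = [[make_participant(w_true) for _ in range(3)] for _ in range(2)]

w = np.zeros(d)
for step in range(100):
    edge_grads = []
    for group in edges:
        grads = [local_gradient(w, X, y) for X, y in group]
        edge_grads.append(edge_aggregate(grads))
    w = cloud_update(w, edge_grads)

print("estimation error:", np.linalg.norm(w - w_true))
```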
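The privacy mechanism can likewise be sketched with pairwise additive masking, the basic idea behind secure aggregation: every pair of participants in a group shares a random mask that one adds and the other subtracts, so the masks cancel in the group sum and the aggregator learns only the sum, never an individual update. Group membership (e.g., derived from the participants' social network, as the paper proposes), key agreement, and dropout handling of a full protocol are omitted here; this is an assumed simplification, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_updates(updates):
    """Return masked copies of each participant's model update.

    For each pair (i, j), a shared random mask is added to i's update and
    subtracted from j's, so all masks cancel in the group-wise sum.
    """
    n, d = updates.shape
    masked = updates.astype(float)
    for i in range(n):
        for j in range(i + 1, n):
            s = rng.normal(size=d)  # pairwise mask shared by i and j
            masked[i] += s          # participant i adds the mask
            masked[j] -= s          # participant j subtracts it
    return masked

updates = rng.normal(size=(4, 3))   # 4 participants, 3-dimensional updates
masked = mask_updates(updates)

# Each masked vector looks random on its own, but the group sums agree,
# so the edge node can aggregate without seeing any individual update.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
```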