{"title":"云计算中的隐私保护和可验证卷积神经网络推理与训练","authors":"Wei Cao , Wenting Shen , Jing Qin , Hao Lin","doi":"10.1016/j.future.2024.107560","DOIUrl":null,"url":null,"abstract":"<div><div>With the rapid development of cloud computing, outsourcing massive data and complex deep learning model to cloud servers (CSs) has become a popular trend, which also brings some security problems. One is that the model stored in the CSs may be corrupted, leading to incorrect inference and training results. The other is that the privacy of outsourced data and model may be compromised. However, existing privacy-preserving and verifiable inference schemes suffer from low detection probability, high communication overhead and substantial computational time. To solve the above problems, we propose a privacy-preserving and verifiable scheme for convolutional neural network inference and training in cloud computing. In our scheme, the model owner generates the authenticators for model parameters before uploading the model to CSs. In the phase of model integrity verification, model owner and user can utilize these authenticators to check model integrity with high detection probability. Furthermore, we design a set of privacy-preserving protocols based on replicated secret sharing for both the inference and training phases, significantly reducing communication overhead and computational time. Through security analysis, we demonstrate that our scheme is secure. Experimental evaluations show that the proposed scheme outperforms existing schemes in privacy-preserving inference and model integrity verification.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"164 ","pages":"Article 107560"},"PeriodicalIF":6.2000,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Privacy-preserving and verifiable convolution neural network inference and training in cloud computing\",\"authors\":\"Wei Cao , Wenting Shen , Jing Qin , Hao Lin\",\"doi\":\"10.1016/j.future.2024.107560\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the rapid development of cloud computing, outsourcing massive data and complex deep learning model to cloud servers (CSs) has become a popular trend, which also brings some security problems. One is that the model stored in the CSs may be corrupted, leading to incorrect inference and training results. The other is that the privacy of outsourced data and model may be compromised. However, existing privacy-preserving and verifiable inference schemes suffer from low detection probability, high communication overhead and substantial computational time. To solve the above problems, we propose a privacy-preserving and verifiable scheme for convolutional neural network inference and training in cloud computing. In our scheme, the model owner generates the authenticators for model parameters before uploading the model to CSs. In the phase of model integrity verification, model owner and user can utilize these authenticators to check model integrity with high detection probability. Furthermore, we design a set of privacy-preserving protocols based on replicated secret sharing for both the inference and training phases, significantly reducing communication overhead and computational time. Through security analysis, we demonstrate that our scheme is secure. 
Experimental evaluations show that the proposed scheme outperforms existing schemes in privacy-preserving inference and model integrity verification.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":\"164 \",\"pages\":\"Article 107560\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2024-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X24005247\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X24005247","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Privacy-preserving and verifiable convolution neural network inference and training in cloud computing
With the rapid development of cloud computing, outsourcing massive data and complex deep learning models to cloud servers (CSs) has become a popular trend, but it also raises security problems. One is that the model stored on the CSs may be corrupted, leading to incorrect inference and training results. The other is that the privacy of the outsourced data and model may be compromised. Moreover, existing privacy-preserving and verifiable inference schemes suffer from low detection probability, high communication overhead, and substantial computational time. To address these problems, we propose a privacy-preserving and verifiable scheme for convolutional neural network inference and training in cloud computing. In our scheme, the model owner generates authenticators for the model parameters before uploading the model to the CSs. In the model integrity verification phase, the model owner and the user can use these authenticators to check model integrity with a high detection probability. Furthermore, we design a set of privacy-preserving protocols based on replicated secret sharing for both the inference and training phases, significantly reducing communication overhead and computational time. Through security analysis, we demonstrate that our scheme is secure. Experimental evaluations show that the proposed scheme outperforms existing schemes in privacy-preserving inference and model integrity verification.
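The abstract does not spell out how block authenticators yield a high detection probability, so the sketch below illustrates the standard sampling argument behind such spot checks: the model parameters are split into blocks, each block receives a tag, and the verifier re-checks a random subset. The use of HMAC tags and the names `tag_blocks`, `spot_check`, and `detection_probability` are illustrative assumptions for this sketch only; the paper's authenticators are constructed to work on outsourced, privacy-protected parameters and are not reproduced here.

```python
import hashlib
import hmac
import secrets

def tag_blocks(key: bytes, blocks: list[bytes]) -> list[bytes]:
    """Model owner (sketch): compute an HMAC authenticator per parameter block,
    binding each tag to its block index."""
    return [hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def spot_check(key: bytes, blocks: list[bytes], tags: list[bytes], challenges: int) -> bool:
    """Verifier (sketch): recompute the tags of `challenges` randomly chosen blocks."""
    for _ in range(challenges):
        i = secrets.randbelow(len(blocks))
        expected = hmac.new(key, i.to_bytes(8, "big") + blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False  # corruption detected
    return True

def detection_probability(corrupted_fraction: float, challenges: int) -> float:
    """If a fraction beta of blocks is corrupted, sampling c blocks (with replacement)
    misses every corrupted block with probability (1 - beta)**c."""
    return 1.0 - (1.0 - corrupted_fraction) ** challenges
```

Under this sampling argument, corrupting 1% of the blocks is detected with probability of roughly 0.99 after about 460 random challenges, independent of the total number of blocks.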
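For readers unfamiliar with the primitive named above, the following is a minimal sketch of 2-out-of-3 replicated secret sharing over a prime field: each value is split into three additive shares, each server stores two of them, and linear operations such as addition are performed locally on shares. The modulus, the lack of fixed-point encoding for real-valued parameters, and the omission of multiplication and convolution protocols are simplifications; the paper's inference and training protocols are considerably more involved.

```python
import secrets

# Illustrative prime modulus; the scheme's actual ring/field and parameter
# encoding are not specified in the abstract.
P = 2**61 - 1

def share(x: int, p: int = P):
    """Split x into additive shares x1 + x2 + x3 = x (mod p) and replicate them:
    server i holds the pair (x_i, x_{i+1}), so any two servers can recover x."""
    x1 = secrets.randbelow(p)
    x2 = secrets.randbelow(p)
    x3 = (x - x1 - x2) % p
    return [(x1, x2), (x2, x3), (x3, x1)]  # replicated shares of servers S0, S1, S2

def reconstruct(share_i, share_next, p: int = P):
    """Recover x from the replicated shares of server i and server i+1 (mod 3)."""
    xi, xi1 = share_i
    _, xi2 = share_next
    return (xi + xi1 + xi2) % p

def add_shares(a, b, p: int = P):
    """Shares of a sum are computed locally by each server, with no communication."""
    return [((u1 + v1) % p, (u2 + v2) % p) for (u1, u2), (v1, v2) in zip(a, b)]

if __name__ == "__main__":
    sa, sb = share(12345), share(67890)
    sc = add_shares(sa, sb)
    assert reconstruct(sc[0], sc[1]) == (12345 + 67890) % P
```

Additions and multiplications by public constants stay local in this representation, which is part of why replicated-sharing protocols can keep communication low; interaction is only required for share multiplications, and hence for convolutions and nonlinear layers.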
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.