{"title":"云和边缘机器学习中同态加密的实验评价","authors":"Joe Hrzich, Gunjan Basra, Talal Halabi","doi":"10.1109/KSE56063.2022.9953624","DOIUrl":null,"url":null,"abstract":"Machine Learning (ML)-based intelligent services are gradually becoming the leading service design and delivery model in edge computing, where user and device data is outsourced to take part of large-scale BigData analytics. This paradigm however entails challenging security and privacy concerns, which require rethinking the fundamental concepts behind performing ML. For instance, the encryption of sensitive data provides a straightforward solution that ensures data security and privacy. In particular, Homomorphic encryption allows arbitrary computation on encrypted data and has gained a lot of attention recently. However, it has not been fully adopted by edge computing-based ML due to its potential impact on classification accuracy and model performance. This paper conducts an experimental evaluation of different types of Homomorphic encryption techniques, namely, Partial, Somewhat, and Fully Homomorphic encryption over several ML models, which train on encrypted data and produce classification predictions based on encrypted input data. The results demonstrate two potential directions in the context of ML privacy at the network edge: privacy-preserving training and privacy-preserving classification. The performance of encryption-driven ML models is compared using different metrics such as accuracy and computation time for plaintext vs. encrypted text. This evaluation will guide future research in investigating which ML models perform better over encrypted data.","PeriodicalId":330865,"journal":{"name":"2022 14th International Conference on Knowledge and Systems Engineering (KSE)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Experimental Evaluation of Homomorphic Encryption in Cloud and Edge Machine Learning\",\"authors\":\"Joe Hrzich, Gunjan Basra, Talal Halabi\",\"doi\":\"10.1109/KSE56063.2022.9953624\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine Learning (ML)-based intelligent services are gradually becoming the leading service design and delivery model in edge computing, where user and device data is outsourced to take part of large-scale BigData analytics. This paradigm however entails challenging security and privacy concerns, which require rethinking the fundamental concepts behind performing ML. For instance, the encryption of sensitive data provides a straightforward solution that ensures data security and privacy. In particular, Homomorphic encryption allows arbitrary computation on encrypted data and has gained a lot of attention recently. However, it has not been fully adopted by edge computing-based ML due to its potential impact on classification accuracy and model performance. This paper conducts an experimental evaluation of different types of Homomorphic encryption techniques, namely, Partial, Somewhat, and Fully Homomorphic encryption over several ML models, which train on encrypted data and produce classification predictions based on encrypted input data. The results demonstrate two potential directions in the context of ML privacy at the network edge: privacy-preserving training and privacy-preserving classification. 
The performance of encryption-driven ML models is compared using different metrics such as accuracy and computation time for plaintext vs. encrypted text. This evaluation will guide future research in investigating which ML models perform better over encrypted data.\",\"PeriodicalId\":330865,\"journal\":{\"name\":\"2022 14th International Conference on Knowledge and Systems Engineering (KSE)\",\"volume\":\"55 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 14th International Conference on Knowledge and Systems Engineering (KSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/KSE56063.2022.9953624\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 14th International Conference on Knowledge and Systems Engineering (KSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/KSE56063.2022.9953624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Experimental Evaluation of Homomorphic Encryption in Cloud and Edge Machine Learning
Abstract: Machine Learning (ML)-based intelligent services are gradually becoming the leading service design and delivery model in edge computing, where user and device data are outsourced to take part in large-scale Big Data analytics. However, this paradigm entails challenging security and privacy concerns, which require rethinking the fundamental concepts behind performing ML. For instance, the encryption of sensitive data provides a straightforward solution that ensures data security and privacy. In particular, homomorphic encryption allows arbitrary computation on encrypted data and has recently gained considerable attention. However, it has not been fully adopted by edge computing-based ML because of its potential impact on classification accuracy and model performance. This paper conducts an experimental evaluation of different types of homomorphic encryption techniques, namely Partially, Somewhat, and Fully Homomorphic Encryption, over several ML models that train on encrypted data and produce classification predictions from encrypted input data. The results demonstrate two potential directions for ML privacy at the network edge: privacy-preserving training and privacy-preserving classification. The performance of encryption-driven ML models is compared using metrics such as accuracy and computation time on plaintext versus ciphertext. This evaluation will guide future research in investigating which ML models perform better over encrypted data.
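To make the Partially Homomorphic case concrete, the sketch below (not from the paper) scores a hypothetical linear classifier over Paillier-encrypted features and times the encrypted computation against the plaintext baseline. It uses the python-paillier library (`phe`); the key size, model weights, and feature values are illustrative assumptions, since the abstract does not specify the authors' implementation or models.

```python
# Minimal sketch, assuming the python-paillier ("phe") library and a toy
# linear model; these choices are illustrative, not the paper's setup.
import time
from phe import paillier

# Key generation on the data owner's side (e.g., the edge device).
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical pre-trained linear model, held in plaintext by the server.
weights = [0.8, -1.2, 0.5]
bias = 0.3
features = [1.0, 2.5, -0.7]          # sensitive input data

# Plaintext inference, as the accuracy/computation-time baseline.
t0 = time.perf_counter()
plain_score = sum(w * x for w, x in zip(weights, features)) + bias
plain_time = time.perf_counter() - t0

# The client encrypts its features; the server never sees them in the clear.
encrypted_features = [public_key.encrypt(x) for x in features]

# Server-side encrypted inference: Paillier supports adding ciphertexts and
# multiplying a ciphertext by a plaintext scalar, which is exactly what the
# linear score w.x + b requires.
t0 = time.perf_counter()
encrypted_score = sum(w * ex for w, ex in zip(weights, encrypted_features)) + bias
enc_time = time.perf_counter() - t0

# Only the private-key holder can decrypt the score and apply the threshold.
score = private_key.decrypt(encrypted_score)
print(f"plaintext score={plain_score:.4f} ({plain_time * 1e6:.1f} us)")
print(f"encrypted score={score:.4f} ({enc_time * 1e3:.1f} ms), class={int(score > 0)}")
```

Because Paillier is only additively homomorphic, this style of privacy-preserving classification is limited to linear scoring; nonlinear models need Somewhat or Fully Homomorphic schemes, which is where the paper's accuracy and computation-time comparisons become relevant.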