Towards Trustworthy Outsourced Deep Neural Networks

Louay Ahmad, Boxiang Dong, B. Samanthula, Ryan Yang Wang, Bill Hui Li

IEEE Cloud Computing, vol. 7, no. 1, pp. 83-88, October 2021. DOI: 10.1109/IEEECloudSummit52029.2021.00021
Abstract:
The rising complexity of deep neural networks imposes stringent demands on computational hardware and deployment expertise. As an alternative, outsourcing a pre-trained model to a third-party server has become increasingly prevalent. However, outsourcing creates opportunities for attackers to interfere with the prediction outcomes of the deep neural network. In this paper, we focus on integrity verification of the prediction results of outsourced deep neural models and make a series of contributions. We propose a new attack based on steganography that enables the server to generate wrong prediction results in a command-and-control fashion. Following that, we design a homomorphic encryption-based authentication scheme to detect wrong predictions made by any attack. Our extensive experiments on benchmark datasets demonstrate the invisibility of the attack and the effectiveness of our authentication approach.
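The attack conceals the adversary's command inside the query itself. The paper's actual encoding is not reproduced here, so the sketch below is only a generic least-significant-bit (LSB) illustration of the idea; the names embed_command and extract_command, the 8-bit trigger byte, and the 28x28 grayscale input are all hypothetical.

import numpy as np

TRIGGER = 0b10110010  # hypothetical 8-bit command byte

def embed_command(img: np.ndarray, command: int) -> np.ndarray:
    """Attacker side: hide one command byte in the LSBs of the
    first 8 pixels of a grayscale uint8 image."""
    stego = img.copy().reshape(-1)
    for i in range(8):
        bit = (command >> i) & 1
        stego[i] = (int(stego[i]) & 0xFE) | bit  # touch only the LSB
    return stego.reshape(img.shape)

def extract_command(img: np.ndarray) -> int:
    """Malicious server side: read the command byte back out."""
    flat = img.reshape(-1)
    return sum((int(flat[i]) & 1) << i for i in range(8))

img = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)
stego = embed_command(img, TRIGGER)
assert np.max(np.abs(stego.astype(int) - img.astype(int))) <= 1
if extract_command(stego) == TRIGGER:
    print("trigger found: the server can now emit a wrong label")

Each pixel changes by at most one intensity level, which is why such a trigger can be invisible to a human inspector while remaining perfectly recoverable by a compromised server.

On the defense side, the abstract names a homomorphic encryption-based authentication scheme but gives no construction. The toy sketch below is not the paper's scheme: it pairs a deliberately insecure, tiny-prime Paillier cryptosystem with a Freivalds-style spot-check of a single integer linear layer, assuming the client kept the pre-outsourcing weights W and a secret challenge vector r.

import secrets
from math import gcd, lcm

# Toy primes -- vastly too small for real security.
P, Q = 1_000_003, 1_000_033
N, N2 = P * Q, (P * Q) ** 2
LAM = lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)  # valid because we take g = N + 1

def enc(m: int) -> int:
    """Paillier encryption: (1 + m*N) * r^N mod N^2."""
    r = secrets.randbelow(N - 2) + 2
    while gcd(r, N) != 1:
        r = secrets.randbelow(N - 2) + 2
    return (1 + (m % N) * N) * pow(r, N, N2) % N2

def dec(c: int) -> int:
    m = (pow(c, LAM, N2) - 1) // N * MU % N
    return m - N if m > N // 2 else m  # map residues back to signed ints

def he_add(c1: int, c2: int) -> int:
    return c1 * c2 % N2           # Enc(a) * Enc(b) = Enc(a + b)

def he_smul(c: int, k: int) -> int:
    return pow(c, k % N, N2)      # Enc(a)^k = Enc(k * a)

# Client side: owns the integer weights W before outsourcing them,
# keeps a secret challenge r and the precomputed row vector u = r^T W.
W = [[3, -1, 4], [2, 0, -5]]                 # hypothetical 2x3 layer
r = [secrets.randbelow(1000) for _ in W]
u = [sum(r[j] * W[j][i] for j in range(len(W))) for i in range(3)]

x = [7, -2, 5]                               # query input
enc_r = [enc(rj) for rj in r]                # challenge, sent encrypted

# Server side: returns z = W x plus Enc(r . z), computed blindly.
z = [sum(W[j][i] * x[i] for i in range(3)) for j in range(len(W))]
proof = enc(0)
for j, zj in enumerate(z):
    proof = he_add(proof, he_smul(enc_r[j], zj))

# Client side: accept z only if Dec(proof) equals (r^T W) x.
assert dec(proof) == sum(u[i] * x[i] for i in range(3))
print("claimed output", z, "passed the homomorphic check")

Because Paillier is additively homomorphic, the server can evaluate the linear form r . z on ciphertexts without ever learning r, and a falsified z passes the final equality check only with low probability over the random challenge. A real scheme must also cover non-linear layers, which this sketch does not attempt.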
Journal Introduction:
This journal has ceased publication.
IEEE Cloud Computing was committed to the timely publication of peer-reviewed articles presenting innovative research ideas, application results, and case studies in all areas of cloud computing. It covered novel theory, algorithms, performance analyses, and applications of techniques, including: Cloud software, Cloud security, Trade-offs between privacy and utility of the cloud, Cloud in the business environment, Cloud economics, Cloud governance, Migrating to the cloud, Cloud standards, Development tools, Backup and recovery, Interoperability, Applications management, Data analytics, Communications protocols, Mobile cloud, Private clouds, Liability issues for data loss on clouds, Data integration, Big data, Cloud education, Cloud skill sets, Cloud energy consumption, The architecture of cloud computing, Applications in commerce, education, and industry, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Business Process as a Service (BPaaS).