A Traitor Tracking Method Towards Deep Learning Models in Cloud Environments

Yu Zhang, Linfeng Wei, Hailiang Li, Hexin Cai, Ying Wu

International Journal of Grid and High Performance Computing, vol. 30, no. 1, pp. 1-17, 2022. DOI: 10.4018/ijghpc.301588
Cloud computing can speed up the training of deep learning models, but training data and model parameters stored in the cloud are vulnerable to theft. Model watermarking is a commonly used protection method, and using adversarial examples as the watermark gives the watermarked images better concealment. Drawing on the signature mechanism in cryptography, a signature-based scheme is proposed that protects deep learning models by identifying these adversarial examples. In the adversarial example generation stage, the corresponding signature information and classification information are embedded in the noise space, so that each generated adversarial example carries implicit identity information that can be verified with a secret key. Experiments on the ImageNet dataset show that the adversarial examples generated by the authors' scheme can be correctly recognized by the classifier that holds the secret key.
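The core idea of the abstract (a secret key deterministically shaping the adversarial noise so that identity information can later be verified) can be illustrated with a minimal sketch. This is not the authors' actual method; the `keyed_signature`, `embed`, and `verify` functions below are hypothetical stand-ins that embed a key-derived sign pattern as the "noise space" payload and verify it by correlating the residual perturbation against the expected pattern.

```python
import hashlib
import numpy as np

def keyed_signature(key: bytes, label: int, shape, eps: float = 0.05):
    """Derive a pseudo-random +/- eps perturbation pattern from the secret key
    and the class label (a hypothetical stand-in for the paper's signature
    information embedded in the noise space)."""
    digest = hashlib.sha256(key + label.to_bytes(4, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return eps * rng.choice([-1.0, 1.0], size=shape)

def embed(image: np.ndarray, key: bytes, label: int) -> np.ndarray:
    """Add the keyed pattern as adversarial-style noise carrying identity info."""
    return np.clip(image + keyed_signature(key, label, image.shape), 0.0, 1.0)

def verify(image: np.ndarray, original: np.ndarray,
           key: bytes, label: int, threshold: float = 0.5) -> bool:
    """Check whether the residual noise matches the pattern for this key/label
    by normalized correlation; only the correct key reproduces the pattern."""
    pattern = keyed_signature(key, label, image.shape)
    residual = image - original
    denom = np.linalg.norm(residual) * np.linalg.norm(pattern) + 1e-12
    score = float(np.sum(residual * pattern) / denom)
    return score > threshold

# Usage: only the holder of the correct key verifies the identity information.
img = np.random.default_rng(0).random((8, 8))
marked = embed(img, b"secret-key", 3)
print(verify(marked, img, b"secret-key", 3))  # correct key
print(verify(marked, img, b"wrong-key", 3))   # wrong key
```

In this toy version the verifier needs the clean image to isolate the residual; the paper's scheme instead lets the keyed classifier itself recognize the embedded identity, but the key-dependence of the noise pattern is the shared principle.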