{"title":"深度学习模型的可信赖和隐私友好的所有权监管框架","authors":"Xirong Zhuang;Lan Zhang;Chen Tang;Yaliang Li","doi":"10.1109/TIFS.2024.3518061","DOIUrl":null,"url":null,"abstract":"Well-trained deep learning (DL) models are widely recognized as valuable intellectual property (IP) and have been extensively adopted. However, concerns regarding IP infringement emerge when these models are either privately sold to end-users or publicly released online. Unauthorized activities, such as redistributing privately purchased models or exploiting restricted open-source models for commercial gain, pose a significant threat to the interests of model owners. In this paper, we introduce D\n<sc>eep</small>\nR\n<sc>eg</small>\n, a trustworthy and privacy-friendly regulatory framework designed to address IP infringement within the realm of DL models, thereby nurturing a healthier development ecosystem. D\n<sc>eep</small>\nR\n<sc>eg</small>\n enables a designated third-party regulator to extract the fingerprint of the original model within a Trusted Execution Environment, as well as to verify suspect models utilizing solely the predicted label without probability. Specifically, we leverage the uniqueness of feature extractors in DL models to craft multiple synthetic inputs for a selected real input. The real input, along with its synthetic inputs, establishes a one-to-many relationship, thereby creating a unique fingerprint for the original model. Furthermore, we propose two distinct methods for suspect detection and piracy judgment. These methods analyze the responses from the model API upon feeding the fingerprint, ensuring a high level of confidence while preventing malicious accusations. Experimental results demonstrate that D\n<sc>eep</small>\nR\n<sc>eg</small>\n achieves 100% detection accuracy for pirated models, with zero false positives for irrelevant models.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"854-870"},"PeriodicalIF":6.3000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DeepReg: A Trustworthy and Privacy-Friendly Ownership Regulatory Framework for Deep Learning Models\",\"authors\":\"Xirong Zhuang;Lan Zhang;Chen Tang;Yaliang Li\",\"doi\":\"10.1109/TIFS.2024.3518061\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Well-trained deep learning (DL) models are widely recognized as valuable intellectual property (IP) and have been extensively adopted. However, concerns regarding IP infringement emerge when these models are either privately sold to end-users or publicly released online. Unauthorized activities, such as redistributing privately purchased models or exploiting restricted open-source models for commercial gain, pose a significant threat to the interests of model owners. In this paper, we introduce D\\n<sc>eep</small>\\nR\\n<sc>eg</small>\\n, a trustworthy and privacy-friendly regulatory framework designed to address IP infringement within the realm of DL models, thereby nurturing a healthier development ecosystem. D\\n<sc>eep</small>\\nR\\n<sc>eg</small>\\n enables a designated third-party regulator to extract the fingerprint of the original model within a Trusted Execution Environment, as well as to verify suspect models utilizing solely the predicted label without probability. 
Specifically, we leverage the uniqueness of feature extractors in DL models to craft multiple synthetic inputs for a selected real input. The real input, along with its synthetic inputs, establishes a one-to-many relationship, thereby creating a unique fingerprint for the original model. Furthermore, we propose two distinct methods for suspect detection and piracy judgment. These methods analyze the responses from the model API upon feeding the fingerprint, ensuring a high level of confidence while preventing malicious accusations. Experimental results demonstrate that D\\n<sc>eep</small>\\nR\\n<sc>eg</small>\\n achieves 100% detection accuracy for pirated models, with zero false positives for irrelevant models.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"854-870\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-12-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10803000/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10803000/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
DeepReg: A Trustworthy and Privacy-Friendly Ownership Regulatory Framework for Deep Learning Models
Well-trained deep learning (DL) models are widely recognized as valuable intellectual property (IP) and have been extensively adopted. However, concerns regarding IP infringement emerge when these models are either privately sold to end-users or publicly released online. Unauthorized activities, such as redistributing privately purchased models or exploiting restricted open-source models for commercial gain, pose a significant threat to the interests of model owners. In this paper, we introduce DeepReg, a trustworthy and privacy-friendly regulatory framework designed to address IP infringement within the realm of DL models, thereby nurturing a healthier development ecosystem. DeepReg enables a designated third-party regulator to extract the fingerprint of the original model within a Trusted Execution Environment, and to verify suspect models using only the predicted label, without class probabilities. Specifically, we leverage the uniqueness of feature extractors in DL models to craft multiple synthetic inputs for a selected real input. The real input, together with its synthetic inputs, forms a one-to-many relationship that serves as a unique fingerprint of the original model. Furthermore, we propose two distinct methods for suspect detection and piracy judgment; both analyze the responses returned by the model API when the fingerprint is submitted, ensuring a high level of confidence while preventing malicious accusations. Experimental results demonstrate that DeepReg achieves 100% detection accuracy for pirated models, with zero false positives for irrelevant models.
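To make the one-to-many fingerprint and the label-only verification described above more concrete, the sketch below is a minimal, hypothetical illustration in PyTorch; it is not the paper's published algorithm, and the names `feature_extractor`, `craft_synthetic_inputs`, `label_only_match_rate`, and `query_api` are assumptions rather than artifacts from DeepReg. It optimizes random inputs until their embeddings under the original model's feature extractor collide with that of a selected real input, then measures how often a suspect model, queried for top-1 labels only, assigns those companions the real input's label.

```python
# A minimal, hypothetical sketch of the one-to-many fingerprint idea described
# in the abstract; NOT the paper's actual algorithm, only an illustration.
import torch
import torch.nn.functional as F


def craft_synthetic_inputs(feature_extractor, real_input, num_synthetic=8,
                           steps=200, lr=0.05):
    """Optimize random starting points so that, under the original model's
    feature extractor, their embeddings collide with the real input's
    embedding. `feature_extractor` is assumed to map a batch of inputs
    to feature vectors."""
    with torch.no_grad():
        target = feature_extractor(real_input.unsqueeze(0))   # shape (1, D)
    synthetic = torch.rand(num_synthetic, *real_input.shape, requires_grad=True)
    optimizer = torch.optim.Adam([synthetic], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        feats = feature_extractor(synthetic)                  # shape (N, D)
        loss = F.mse_loss(feats, target.expand_as(feats))
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            synthetic.clamp_(0.0, 1.0)   # keep inputs in a valid pixel range
    return synthetic.detach()


def label_only_match_rate(query_api, real_input, synthetic_inputs):
    """Label-only check: `query_api` is a stand-in for the suspect model's API
    and is assumed to return only top-1 labels for a batch of inputs. The
    match rate measures how often the synthetic companions inherit the real
    input's label from the suspect model."""
    real_label = query_api(real_input.unsqueeze(0))[0]
    suspect_labels = query_api(synthetic_inputs)
    matches = sum(int(lbl == real_label) for lbl in suspect_labels)
    return matches / len(suspect_labels)
```

Under this reading of the abstract, an independently trained model would rarely map the optimized noise to the real input's label, so a high match rate suggests a pirated copy while a low one keeps irrelevant models from being falsely accused; the paper's actual detection and judgment procedures, TEE-based fingerprint extraction, and confidence guarantees are more involved than this sketch.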
Journal introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.