Cascade Ownership Verification Framework Based on Invisible Watermark for Model Copyright Protection

IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Concurrency and Computation: Practice & Experience, Vol. 37, Issue 4-5 | Pub Date: 2025-02-10 | DOI: 10.1002/cpe.8394
Ruoxi Wang, Yujia Zhu, Xia Daoxun
Citations: 0

Abstract


Successfully training a model requires substantial computational power, excellent model design, and high training costs, which implies that a well-trained model holds significant commercial value. Protecting a trained Deep Neural Network (DNN) model from Intellectual Property (IP) infringement has become a matter of intense concern recently. Particularly, embedding and verifying watermarks in black-box models without accessing internal model parameters, while ensuring the robustness and invisibility of the watermark, remains a challenging issue. Unlike many existing methods, we propose a cascade ownership verification framework based on invisible watermarks, with a focus on how to effectively protect the copyright of black-box watermark models and detect unauthorized users' infringement behaviors. This framework consists of two parts: watermark generation and copyright verification. In the watermark generation phase, watermarked samples are generated from key samples and label images. The difference between watermarked samples and key samples is imperceptible, while a specific identifier has been injected into the watermarked samples, leaving a backdoor as an entry point for copyright verification. The copyright verification phase employs hypothesis testing to enhance the confidence level of verification. In image classification tasks based on MNIST, CIFAR-10, and CIFAR-100 datasets, experiments were conducted on several popular deep learning models. The experimental results show that this framework offers high security and effectiveness in protecting model copyrights and demonstrates strong robustness against pruning and fine-tuning attacks.
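The two phases described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function names (`blend_watermark`, `verify_ownership`), the alpha-blending scheme for injecting the identifier, and the use of an exact one-sided binomial test are all assumptions standing in for details the abstract does not specify.

```python
# Hypothetical sketch of the two-phase framework. All names and the
# blending/testing choices are illustrative assumptions, not the paper's method.
import math

import numpy as np


def blend_watermark(key_samples, label_image, alpha=0.03):
    """Phase 1 (watermark generation): inject a faint identifier image.

    A small alpha keeps watermarked samples visually indistinguishable
    from the key samples (invisibility), while the embedded pattern acts
    as a backdoor trigger that verification can probe later.
    """
    return np.clip((1 - alpha) * key_samples + alpha * label_image, 0.0, 1.0)


def binomial_tail(hits, n, p0):
    """Exact one-sided p-value P(X >= hits) for X ~ Binomial(n, p0)."""
    return sum(math.comb(n, k) * p0**k * (1 - p0) ** (n - k)
               for k in range(hits, n + 1))


def verify_ownership(model_predict, watermarked_samples, target_label,
                     num_classes=10, significance=0.01):
    """Phase 2 (copyright verification): hypothesis test on trigger behaviour.

    H0: the suspect model is independent of the watermark, so it predicts
    the target label only at the chance rate p0 = 1/num_classes.
    H1: the model contains the backdoor (trigger accuracy far above p0).
    Rejecting H0 at the chosen significance level supports the ownership claim.
    """
    preds = np.asarray(model_predict(watermarked_samples))
    hits = int(np.sum(preds == target_label))
    n = len(watermarked_samples)
    p_value = binomial_tail(hits, n, 1.0 / num_classes)
    return p_value < significance, p_value


if __name__ == "__main__":
    # Toy usage with stand-ins for MNIST-sized key samples and a label image.
    rng = np.random.default_rng(0)
    key = rng.random((100, 28, 28))
    identifier = rng.random((28, 28))
    wm = blend_watermark(key, identifier)

    stolen = lambda x: np.full(len(x), 7)  # backdoored model: always label 7
    owned, p = verify_ownership(stolen, wm, target_label=7)
    print(owned, p)  # ownership supported at the 1% significance level
```

Querying only `model_predict` is what makes this black-box: the verifier never touches internal parameters, and the hypothesis test converts raw trigger accuracy into a confidence level for the claim.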

Source Journal
Concurrency and Computation: Practice & Experience (Engineering & Technology; Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Articles published per year: 664
Review turnaround: 9.6 months
Journal introduction: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.