Invisible DNN Watermarking Against Model Extraction Attack

IEEE Transactions on Cybernetics · IF 10.5 · Q1 (Automation & Control Systems) · CAS Tier 1 (Computer Science)
Pub Date: 2024-12-24 · DOI: 10.1109/TCYB.2024.3514838
Zuping Xi;Zuomin Qu;Wei Lu;Xiangyang Luo;Xiaochun Cao
Journal: IEEE Transactions on Cybernetics, vol. 55, no. 2, pp. 800–811. Published online: 2024-12-24.
Full text: https://ieeexplore.ieee.org/document/10813422/
Citations: 0

Abstract

Deep neural network (DNN) models are widely used in various fields, such as pattern recognition and natural language processing, and provide considerable commercial value to their owners. Embedding a digital watermark in the model allows the legitimate owner to detect unauthorized use of the model. However, the existing DNN watermarking methods are vulnerable to model extraction attacks since the watermark task and the original model task are independent. In this article, a novel collaborative DNN watermarking framework is proposed to defend against model extraction attacks by establishing cooperation between the watermark generation and embedding. Specifically, the trigger samples are not only imperceptible to ensure perceptual stealth security but also infused with target-label information to guide the following feature associations. In the process of watermark embedding, the feature representation of trigger samples is forced to be similar to that of the task distribution samples via feature coupling. Consequently, the trigger samples from our framework can be recognized in the stolen model as task distribution samples, so that the ownership of the model can be successfully verified. Extensive experiments on CIFAR10, CIFAR100, and ImageNet demonstrate the effectiveness and superior performance of the proposed watermarking framework against various model extraction attacks.
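The coupling step described above, pulling the feature representation of trigger samples toward that of task-distribution samples, can be sketched as a joint training objective. The snippet below is a minimal illustration, not the paper's implementation: the mean-squared coupling penalty, the weight `lam`, and the toy feature arrays are all assumptions chosen for clarity.

```python
import numpy as np

def feature_coupling_loss(f_trigger, f_task):
    """Mean squared distance between trigger-sample features and
    task-distribution features (hypothetical formulation; the paper's
    exact coupling objective may differ)."""
    return float(np.mean((f_trigger - f_task) ** 2))

def total_loss(task_loss, f_trigger, f_task, lam=0.1):
    # Joint objective: the original task loss plus a coupling penalty
    # that pulls trigger features toward the task feature distribution,
    # so an extracted model reproduces the watermark behavior.
    return task_loss + lam * feature_coupling_loss(f_trigger, f_task)

rng = np.random.default_rng(0)
f_task = rng.normal(size=(8, 16))                      # in-distribution features
f_trigger = f_task + 0.01 * rng.normal(size=(8, 16))   # nearly coupled trigger features
loss = total_loss(task_loss=0.5, f_trigger=f_trigger, f_task=f_task)
```

As the trigger features converge to the task-distribution features, the penalty vanishes and the joint loss reduces to the task loss alone, which is why a surrogate model trained on the victim's outputs inherits the trigger responses.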
Source journal: IEEE Transactions on Cybernetics
Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 25.40
Self-citation rate: 11.00%
Articles published per year: 1869
Journal description: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines, or among machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.