Structural Watermarking to Deep Neural Networks via Network Channel Pruning

Xiangyu Zhao, Yinzhe Yao, Hanzhou Wu, Xinpeng Zhang
{"title":"Structural Watermarking to Deep Neural Networks via Network Channel Pruning","authors":"Xiangyu Zhao, Yinzhe Yao, Hanzhou Wu, Xinpeng Zhang","doi":"10.1109/WIFS53200.2021.9648376","DOIUrl":null,"url":null,"abstract":"In order to protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning the DNN parameters, which, however, cannot resist against various attack methods that remove watermarks by altering DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that utilizes channel pruning to embed the watermark into the host DNN architecture instead of crafting the DNN parameters. To be specific, during watermark embedding, we prune the internal channels of the host DNN with the channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Due to the superiority of pruning mechanism, the performance of the DNN model on its original task is reserved during watermark embedding. Experimental results have shown that, the proposed work enables the embedded watermark to be reliably recovered and provides a sufficient payload, without sacrificing the usability of the DNN model. It is also demonstrated that the proposed work is robust against common transforms and attacks designed for conventional watermarking approaches.","PeriodicalId":196985,"journal":{"name":"2021 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Workshop on Information Forensics and Security (WIFS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WIFS53200.2021.9648376","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

In order to protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning the DNN parameters; these approaches, however, cannot resist attacks that remove the watermark by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that uses channel pruning to embed the watermark into the architecture of the host DNN rather than into its parameters. Specifically, during watermark embedding, we prune the internal channels of the host DNN with channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Owing to the effectiveness of the pruning mechanism, the performance of the DNN model on its original task is preserved during watermark embedding. Experimental results show that the proposed scheme allows the embedded watermark to be reliably recovered and provides a sufficient payload without sacrificing the usability of the DNN model. It is also demonstrated that the proposed scheme is robust against common transforms and attacks designed for conventional watermarking approaches.
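To make the embed-by-pruning and extract-by-inspection idea concrete, the sketch below encodes each watermark bit as a per-layer channel pruning rate and later decodes the bits by reading the channel counts back from the architecture. This is only an illustrative toy, not the authors' implementation: the two-rate codebook, the helper names (embed_bits, build_convnet, extract_bits), the toy CNN, and the assumed base widths are all assumptions made for the example.

```python
# Illustrative sketch (assumptions, not the paper's actual construction):
# bits are encoded in how many output channels each convolutional layer keeps,
# and decoded by comparing the observed channel counts to the unpruned widths.
import torch.nn as nn

# Hypothetical codebook: bit 0 -> prune 25% of channels, bit 1 -> prune 50%.
RATE_FOR_BIT = {0: 0.25, 1: 0.50}

def embed_bits(base_widths, bits):
    """Return pruned per-layer widths whose values encode the watermark bits."""
    assert len(base_widths) >= len(bits)
    pruned = list(base_widths)
    for i, b in enumerate(bits):
        keep = 1.0 - RATE_FOR_BIT[b]
        pruned[i] = int(round(base_widths[i] * keep))
    return pruned

def build_convnet(widths, in_ch=3):
    """Build a toy CNN whose per-layer output channel counts follow `widths`."""
    layers, prev = [], in_ch
    for w in widths:
        layers += [nn.Conv2d(prev, w, kernel_size=3, padding=1), nn.ReLU()]
        prev = w
    return nn.Sequential(*layers)

def extract_bits(model, base_widths, n_bits):
    """Recover the bits by comparing observed channel counts to the base widths."""
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    bits = []
    for i in range(n_bits):
        rate = 1.0 - convs[i].out_channels / base_widths[i]
        # Decode to the nearest pruning rate in the codebook.
        bits.append(min(RATE_FOR_BIT, key=lambda b: abs(RATE_FOR_BIT[b] - rate)))
    return bits

if __name__ == "__main__":
    base = [64, 64, 128, 128, 256, 256, 512, 512]   # assumed unpruned widths
    watermark = [1, 0, 1, 1, 0, 0, 1, 0]
    model = build_convnet(embed_bits(base, watermark))
    assert extract_bits(model, base, len(watermark)) == watermark
    print("recovered watermark:", extract_bits(model, base, len(watermark)))
```

Because the watermark lives in the channel counts of the architecture rather than in the weight values, attacks that only perturb or fine-tune the parameters leave it intact, which is the robustness argument the abstract makes.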