WaterMAS: Sharpness-Aware Maximization for Neural Network Watermarking

Carl De Sousa Trias, Mihai Mitrea, Attilio Fiandrotti, Marco Cagnazzo, Sumanta Chaudhuri, Enzo Tartaglione
{"title":"WaterMAS: Sharpness-Aware Maximization for Neural Network Watermarking","authors":"Carl De Sousa Trias, Mihai Mitrea, Attilio Fiandrotti, Marco Cagnazzo, Sumanta Chaudhuri, Enzo Tartaglione","doi":"arxiv-2409.03902","DOIUrl":null,"url":null,"abstract":"Nowadays, deep neural networks are used for solving complex tasks in several\ncritical applications and protecting both their integrity and intellectual\nproperty rights (IPR) has become of utmost importance. To this end, we advance\nWaterMAS, a substitutive, white-box neural network watermarking method that\nimproves the trade-off among robustness, imperceptibility, and computational\ncomplexity, while making provisions for increased data payload and security.\nWasterMAS insertion keeps unchanged the watermarked weights while sharpening\ntheir underlying gradient space. The robustness is thus ensured by limiting the\nattack's strength: even small alterations of the watermarked weights would\nimpact the model's performance. The imperceptibility is ensured by inserting\nthe watermark during the training process. The relationship among the WaterMAS\ndata payload, imperceptibility, and robustness properties is discussed. The\nsecret key is represented by the positions of the weights conveying the\nwatermark, randomly chosen through multiple layers of the model. The security\nis evaluated by investigating the case in which an attacker would intercept the\nkey. The experimental validations consider 5 models and 2 tasks (VGG16,\nResNet18, MobileNetV3, SwinT for CIFAR10 image classification, and DeepLabV3\nfor Cityscapes image segmentation) as well as 4 types of attacks (Gaussian\nnoise addition, pruning, fine-tuning, and quantization). 
The code will be\nreleased open-source upon acceptance of the article.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"56 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep neural networks now solve complex tasks in several critical applications, and protecting both their integrity and intellectual property rights (IPR) has become of utmost importance. To this end, we advance WaterMAS, a substitutive, white-box neural network watermarking method that improves the trade-off among robustness, imperceptibility, and computational complexity, while making provisions for increased data payload and security. WaterMAS insertion leaves the watermarked weights unchanged while sharpening their underlying gradient space. Robustness is thus ensured by limiting the attack's strength: even small alterations of the watermarked weights would impact the model's performance. Imperceptibility is ensured by inserting the watermark during the training process. The relationship among the WaterMAS data payload, imperceptibility, and robustness properties is discussed. The secret key is represented by the positions of the weights conveying the watermark, randomly chosen across multiple layers of the model. Security is evaluated by investigating the case in which an attacker intercepts the key. The experimental validation considers 5 models and 2 tasks (VGG16, ResNet18, MobileNetV3, and SwinT for CIFAR10 image classification, and DeepLabV3 for Cityscapes image segmentation), as well as 4 types of attacks (Gaussian noise addition, pruning, fine-tuning, and quantization). The code will be released open-source upon acceptance of the article.
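The abstract describes the secret key as the positions of watermark-carrying weights, randomly chosen across multiple layers. The abstract does not specify how each weight encodes a bit, so the sketch below is only illustrative: it assumes a sign-based encoding (positive weight = 1, negative = 0) over key positions drawn with a seeded random generator, and uses random arrays as stand-ins for real layer weights. The actual WaterMAS insertion happens during training, not by post-hoc editing as done here.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # the seed plays the role of the secret key

# Toy "model": weight tensors for a few layers (stand-ins for real layers)
layers = {
    "conv1": rng.normal(size=(64, 3, 3, 3)),
    "conv2": rng.normal(size=(128, 64, 3, 3)),
    "fc":    rng.normal(size=(10, 512)),
}

payload_bits = rng.integers(0, 2, size=32)  # watermark message (data payload)

# Secret key: unique (layer, flat-index) positions chosen across multiple layers
key, seen = [], set()
while len(key) < payload_bits.size:
    name = str(rng.choice(list(layers)))
    idx = int(rng.integers(0, layers[name].size))
    if (name, idx) not in seen:
        seen.add((name, idx))
        key.append((name, idx))

def embed(layers, key, bits):
    """Force the sign of each keyed weight to encode one bit (illustrative only)."""
    for (name, idx), bit in zip(key, bits):
        w = layers[name].reshape(-1)  # view into the tensor, so writes persist
        w[idx] = abs(w[idx]) if bit else -abs(w[idx])

def extract(layers, key):
    """Read the watermark back from the signs of the keyed weights."""
    return np.array([int(layers[name].reshape(-1)[idx] > 0) for name, idx in key])

embed(layers, key, payload_bits)
recovered = extract(layers, key)
```

An attacker without the seed cannot locate the keyed positions among the tens of thousands of weights, which is the security property the abstract evaluates under key interception.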