A Protocol for Secure Verification of Watermarks Embedded into Machine Learning Models

K. Kapusta, V. Thouvenot, Olivier Bettan, Hugo Beguinet, Hugo Senet
{"title":"嵌入到机器学习模型中的水印安全验证协议","authors":"K. Kapusta, V. Thouvenot, Olivier Bettan, Hugo Beguinet, Hugo Senet","doi":"10.1145/3437880.3460409","DOIUrl":null,"url":null,"abstract":"Machine Learning is a well established tool used in a variety of applications. As training advanced models requires considerable amounts of meaningful data in addition to specific knowledge, a new business model separate models creators from model users. Pre-trained models are sold or made available as a service. This raises several security challenges, among others the one of intellectual property protection. Therefore, a new research track actively seeks to provide techniques for model watermarking that would enable model identification in case of suspicion of model theft or misuse. In this paper, we focus on the problem of secure watermarks verification, which affects all of the proposed techniques and until now was barely tackled. First, we revisit the existing threat model. In particular, we explain the possible threats related to a semi-honest or dishonest verification authority. Secondly, we show how to reduce trust requirements between participants by performing the watermarks verification on encrypted data. Finally, we describe a novel secure verification protocol as well as detail its possible implementation using Multi-Party Computation. The proposed solution does not only preserve the confidentiality of the watermarks but also helps detecting evasion attacks. It could be adopted to work with other authentication schemes based on watermarking, especially with image watermarking schemes.","PeriodicalId":120300,"journal":{"name":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"A Protocol for Secure Verification of Watermarks Embedded into Machine Learning Models\",\"authors\":\"K. Kapusta, V. Thouvenot, Olivier Bettan, Hugo Beguinet, Hugo Senet\",\"doi\":\"10.1145/3437880.3460409\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine Learning is a well established tool used in a variety of applications. As training advanced models requires considerable amounts of meaningful data in addition to specific knowledge, a new business model separate models creators from model users. Pre-trained models are sold or made available as a service. This raises several security challenges, among others the one of intellectual property protection. Therefore, a new research track actively seeks to provide techniques for model watermarking that would enable model identification in case of suspicion of model theft or misuse. In this paper, we focus on the problem of secure watermarks verification, which affects all of the proposed techniques and until now was barely tackled. First, we revisit the existing threat model. In particular, we explain the possible threats related to a semi-honest or dishonest verification authority. Secondly, we show how to reduce trust requirements between participants by performing the watermarks verification on encrypted data. Finally, we describe a novel secure verification protocol as well as detail its possible implementation using Multi-Party Computation. The proposed solution does not only preserve the confidentiality of the watermarks but also helps detecting evasion attacks. 
It could be adopted to work with other authentication schemes based on watermarking, especially with image watermarking schemes.\",\"PeriodicalId\":120300,\"journal\":{\"name\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3437880.3460409\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3437880.3460409","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Machine Learning is a well-established tool used in a variety of applications. As training advanced models requires considerable amounts of meaningful data in addition to specific expertise, a new business model separates model creators from model users: pre-trained models are sold or made available as a service. This raises several security challenges, among them the protection of intellectual property. A new research track therefore actively seeks to provide model watermarking techniques that enable model identification in case of suspected theft or misuse. In this paper, we focus on the problem of secure watermark verification, which affects all of the proposed techniques and has barely been tackled until now. First, we revisit the existing threat model; in particular, we explain the possible threats posed by a semi-honest or dishonest verification authority. Second, we show how to reduce the trust requirements between participants by performing the watermark verification on encrypted data. Finally, we describe a novel secure verification protocol and detail its possible implementation using Multi-Party Computation. The proposed solution not only preserves the confidentiality of the watermarks but also helps detect evasion attacks. It could be adapted to work with other authentication schemes based on watermarking, especially image watermarking schemes.
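The abstract describes the protocol only at a high level. The following toy Python sketch is not the paper's actual protocol; under strong simplifying assumptions it illustrates the general idea of verifying a trigger-set (backdoor-style) watermark while keeping the expected labels additively secret-shared between the model owner and the verification authority. All names (share, verify_watermark, stolen_model) are hypothetical; a real MPC deployment would also protect the trigger inputs and the model evaluation itself.

```python
# Toy sketch, not the paper's protocol: trigger-set watermark verification
# where the expected labels are additively secret-shared between the model
# owner and the verification authority, so only the aggregate match count is
# ever reconstructed. All names below are illustrative assumptions.
import random

Q = 2**31 - 1  # modulus for additive secret sharing


def share(value, q=Q):
    """Split an integer into two additive shares modulo q."""
    r = random.randrange(q)
    return r, (value - r) % q


def verify_watermark(suspect_model, trigger_inputs, shares_owner,
                     shares_authority, num_classes, threshold=0.9, q=Q):
    """Count trigger inputs classified with the secret-shared expected labels;
    only the total match count is opened, not the individual labels."""
    acc_o, acc_a = 0, 0
    for x, lo, la in zip(trigger_inputs, shares_owner, shares_authority):
        pred = suspect_model(x)                                   # prediction on a trigger
        p = [1 if c == pred else 0 for c in range(num_classes)]   # one-hot prediction
        # The dot product with a shared one-hot label is linear, so each
        # party evaluates it locally on its own share.
        acc_o = (acc_o + sum(pi * si for pi, si in zip(p, lo))) % q
        acc_a = (acc_a + sum(pi * si for pi, si in zip(p, la))) % q
    matches = (acc_o + acc_a) % q                                 # opened jointly at the end
    return matches / len(trigger_inputs) >= threshold


# The owner one-hot encodes each secret trigger label and shares it.
labels = [2, 0, 1, 2]          # secret expected labels for the trigger set
num_classes = 3
shares_owner, shares_authority = [], []
for y in labels:
    pairs = [share(1 if c == y else 0) for c in range(num_classes)]
    shares_owner.append([a for a, _ in pairs])
    shares_authority.append([b for _, b in pairs])


def stolen_model(x):
    # Stand-in suspect model that happens to reproduce the watermark behaviour.
    return labels[x]


print(verify_watermark(stolen_model, [0, 1, 2, 3],
                       shares_owner, shares_authority, num_classes))  # True
```

Because the per-sample check reduces to a linear dot product with the shared one-hot label, each party can compute its contribution locally and only the aggregate match count is reconstructed. The non-linear steps (evaluating the model on protected inputs, thresholding inside the secure computation) are what dedicated MPC frameworks would handle in a full implementation.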