PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning

Hongbin Liu, Jinyuan Jia, N. Gong
{"title":"PoisonedEncoder:毒害对比学习中未标记的预训练数据","authors":"Hongbin Liu, Jinyuan Jia, N. Gong","doi":"10.48550/arXiv.2205.06401","DOIUrl":null,"url":null,"abstract":"Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder , a data poisoning attack to contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built based on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs; and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder, including one pre-processing , three in-processing , and one post-processing defenses. Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.","PeriodicalId":91597,"journal":{"name":"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium","volume":"10 1","pages":"3629-3645"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning\",\"authors\":\"Hongbin Liu, Jinyuan Jia, N. Gong\",\"doi\":\"10.48550/arXiv.2205.06401\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder , a data poisoning attack to contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built based on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs; and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder, including one pre-processing , three in-processing , and one post-processing defenses. 
Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.\",\"PeriodicalId\":91597,\"journal\":{\"name\":\"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium\",\"volume\":\"10 1\",\"pages\":\"3629-3645\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2205.06401\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2205.06401","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15

Abstract

Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder, a data poisoning attack to contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built based on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs; and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder, including one pre-processing, three in-processing, and one post-processing defenses. Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.
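As a rough illustration of the bilevel formulation mentioned in the abstract (the notation below is ours, not the paper's: P denotes the set of poisoning inputs, D the clean unlabeled pre-training data, L_CL the contrastive pre-training loss, theta*(P) the encoder pre-trained on the poisoned data, g the downstream classifier built on that encoder, and (x_i, y_i) the attacker-chosen input/class pairs), the attack can be sketched as

  \max_{\mathcal{P}} \; \sum_{i} \mathbb{1}\!\left[ g_{\theta^*(\mathcal{P})}(x_i) = y_i \right]
  \quad \text{subject to} \quad
  \theta^*(\mathcal{P}) = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{CL}}\!\left(\theta;\; \mathcal{D} \cup \mathcal{P}\right).

The outer problem maximizes how many attacker-chosen inputs are classified as their attacker-chosen classes, while the inner problem is the ordinary contrastive pre-training run by the victim on the poisoned data; solving the inner problem exactly for every candidate P is intractable, which is why the paper proposes a contrastive-learning-tailored method to solve the problem approximately.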