Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey

ACM Computing Surveys · Q1, Computer Science, Theory & Methods · Impact Factor 28.0 · Pub Date: 2025-03-04 · DOI: 10.1145/3721479
Vu Tuan Truong, Luan Ba Dang, Long Bao Le
Citations: 0

Abstract

Diffusion models (DMs) have achieved state-of-the-art performance on various generative tasks such as image synthesis, text-to-image, and text-guided image-to-image generation. However, the more powerful DMs become, the more harm they can potentially cause. Recent studies have shown that DMs are vulnerable to a wide range of attacks, including adversarial attacks, membership inference attacks, backdoor injection, and various multi-modal threats. Since numerous pre-trained DMs are published widely on the Internet, the potential threats from these attacks are especially detrimental to society, making DM-related security a topic worthy of investigation. Therefore, in this paper, we conduct a comprehensive survey on the security aspects of DMs, focusing on various attack and defense methods for DMs. First, we present essential background on DMs, covering five main types: denoising diffusion probabilistic models, denoising diffusion implicit models, noise conditioned score networks, stochastic differential equations, and multi-modal conditional DMs. We then provide a comprehensive survey of recent works investigating different types of attacks that exploit the vulnerabilities of DMs, and thoroughly review potential countermeasures to mitigate each of the presented threats. Finally, we discuss open challenges of DM-related security and describe potential research directions for this topic.
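As background for the denoising diffusion probabilistic models (DDPMs) the abstract lists, the following is a minimal sketch (not taken from the survey) of the standard DDPM forward noising process, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, with an assumed linear beta schedule; all parameter values here are illustrative.

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Variance schedule beta_1..beta_T (illustrative values)."""
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a given timestep t."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

T = 1000
betas = linear_beta_schedule(T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product ᾱ_t

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))                         # a toy "image"
x_small, _ = forward_diffuse(x0, 10, alpha_bar, rng)     # lightly noised
x_large, _ = forward_diffuse(x0, T - 1, alpha_bar, rng)  # near pure noise

# At t = T-1, ᾱ_t is tiny, so x_t is dominated by Gaussian noise;
# a DM is trained to reverse this corruption step by step.
print(float(alpha_bar[0]), float(alpha_bar[-1]))
```

Attacks such as backdoor injection typically tamper with exactly this training corruption or its reverse denoiser, which is why the schedule above is the natural place to start when reasoning about DM security.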
Source journal: ACM Computing Surveys (Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 0.60%
Articles per year: 372
Review time: 12 months
About the journal: ACM Computing Surveys is an academic journal that focuses on publishing surveys and tutorials on various areas of computing research and practice. The journal aims to provide comprehensive and easily understandable articles that guide readers through the literature and help them understand topics outside their specialties. In terms of impact, CSUR has a high reputation, with a 2022 Impact Factor of 16.6; it is ranked 3rd out of 111 journals in the field of Computer Science, Theory & Methods. ACM Computing Surveys is indexed and abstracted in various services, including AI2 Semantic Scholar, Baidu, Clarivate/ISI: JCR, CNKI, DeepDyve, DTU, EBSCO: EDS/HOST, and IET Inspec, among others.