A Survey on Large Language Model (LLM) Security and Privacy: The Good, The Bad, and The Ugly

High-Confidence Computing · IF 3.2 · Q2 (Computer Science, Information Systems) · Pub Date: 2024-03-01 · DOI: 10.1016/j.hcc.2024.100211
Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, Yue Zhang
{"title":"A Survey on Large Language Model (LLM) Security and Privacy: The Good, The Bad, and The Ugly","authors":"Yifan Yao,&nbsp;Jinhao Duan,&nbsp;Kaidi Xu,&nbsp;Yuanfang Cai,&nbsp;Zhibo Sun,&nbsp;Yue Zhang","doi":"10.1016/j.hcc.2024.100211","DOIUrl":null,"url":null,"abstract":"<div><p>Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper explores the intersection of LLMs with security and privacy. Specifically, we investigate how LLMs positively impact security and privacy, potential risks and threats associated with their use, and inherent vulnerabilities within LLMs. Through a comprehensive literature review, the paper categorizes the papers into “The Good” (beneficial LLM applications), “The Bad” (offensive applications), and “The Ugly” (vulnerabilities of LLMs and their defenses). We have some interesting findings. For example, LLMs have proven to enhance code security (code vulnerability detection) and data privacy (data confidentiality protection), outperforming traditional methods. However, they can also be harnessed for various attacks (particularly user-level attacks) due to their human-like reasoning abilities. We have identified areas that require further research efforts. For example, Research on model and parameter extraction attacks is limited and often theoretical, hindered by LLM parameter scale and confidentiality. Safe instruction tuning, a recent development, requires more exploration. We hope that our work can shed light on the LLMs’ potential to both bolster and jeopardize cybersecurity.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266729522400014X/pdfft?md5=1984f6886539e5ada13eeb8c49a9ef8b&pid=1-s2.0-S266729522400014X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266729522400014X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). At the same time, LLMs have gained traction in the security community, both revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper explores the intersection of LLMs with security and privacy. Specifically, we investigate how LLMs positively impact security and privacy, the potential risks and threats associated with their use, and the inherent vulnerabilities within LLMs. Through a comprehensive literature review, the paper categorizes prior work into “The Good” (beneficial LLM applications), “The Bad” (offensive applications), and “The Ugly” (vulnerabilities of LLMs and their defenses). Our review yields several notable findings. For example, LLMs have been shown to enhance code security (code vulnerability detection) and data privacy (data confidentiality protection), outperforming traditional methods. However, they can also be harnessed for various attacks (particularly user-level attacks) because of their human-like reasoning abilities. We have also identified areas that require further research. For example, research on model and parameter extraction attacks is limited and often theoretical, hindered by the scale and confidentiality of LLM parameters. Safe instruction tuning, a recent development, likewise requires more exploration. We hope that our work sheds light on LLMs’ potential both to bolster and to jeopardize cybersecurity.
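The survey’s “Good” category centers on tasks such as LLM-driven code vulnerability detection. As a concrete, hedged illustration (not drawn from the paper itself), the following Python sketch prompts a chat-style model to review a snippet containing a SQL injection flaw; the `openai` client usage, model name, and prompt wording are illustrative assumptions, not the authors’ method.

# A minimal sketch of "The Good": asking an LLM to flag a code vulnerability.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the model name and prompts are illustrative, not from the survey.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
import sqlite3

def get_user(db, username):
    cur = db.cursor()
    # User input is concatenated directly into the SQL string.
    cur.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security reviewer. Report any vulnerability in "
                "the given code, name its CWE identifier, and propose a fix."
            ),
        },
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
# A capable model typically flags SQL injection (CWE-89) and suggests a
# parameterized query, e.g. cur.execute("SELECT * FROM users WHERE name = ?", (username,))

In this setup the model acts as a zero-shot static analyzer; the abstract’s claim is that such prompting can outperform traditional methods on some vulnerability classes, though model outputs are probabilistic and should be verified before acting on them.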

Source journal: High-Confidence Computing · CiteScore: 4.70 · Self-citation rate: 0.00% · Articles published: 0
Latest articles in this journal
- Navigating the Digital Twin Network landscape: A survey on architecture, applications, privacy and security
- Erratum to “An effective digital audio watermarking using a deep convolutional neural network with a search location optimization algorithm for improvement in Robustness and Imperceptibility” [High-Confid. Comput. 3 (2023) 100153]
- On Building Automation System security
- SoK: Decentralized Storage Network
- Exploring Personalized Internet of Things (PIoT), social connectivity, and Artificial Social Intelligence (ASI): A survey