The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination

Dipto Barman, Ziyi Guo, Owen Conlan
Journal: Machine Learning with Applications, Volume 16, Article 100545
DOI: 10.1016/j.mlwa.2024.100545
Published: 2024-03-11 (Journal Article)
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666827024000215/pdfft?md5=62d261346a52f0843148ea85c02785d0&pid=1-s2.0-S2666827024000215-main.pdf

Abstract

Disinformation, the deliberate spread of false or misleading information, poses a significant threat to our society by undermining trust, exacerbating polarization, and manipulating public opinion. With the rapid advancement of artificial intelligence and the growing prominence of large language models (LLMs) such as ChatGPT, new avenues for the dissemination of disinformation are emerging. This review paper explores the potential of LLMs to initiate the generation of multimedia disinformation, encompassing text, images, audio, and video. We begin by examining the capabilities of LLMs, highlighting their potential to create compelling, context-aware content that can be weaponized for malicious purposes. Subsequently, we examine the nature of disinformation and the various mechanisms through which it spreads in the digital landscape. Utilizing these advanced models, malicious actors can automate and scale up disinformation effectively. We describe a theoretical pipeline for creating and disseminating disinformation on social media. Existing interventions to combat disinformation are also reviewed. While these efforts have shown success, we argue that they need to be strengthened to effectively counter the escalating threat posed by LLMs. Digital platforms have, unfortunately, enabled malicious actors to extend the reach of disinformation. The advent of LLMs poses an additional concern, as they can be harnessed to significantly amplify the velocity, variety, and volume of disinformation. Thus, this review proposes augmenting current interventions with AI tools like LLMs, which are capable of assessing information more swiftly and comprehensively than human fact-checkers. This paper illuminates the dark side of LLMs and highlights their potential to be exploited as disinformation dissemination tools.
Citations: 0

