Exploring the impact of automated correction of misinformation in social media

IF 2.5 · CAS Quartile 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Ai Magazine · Pub Date: 2024-06-04 · DOI: 10.1002/aaai.12180
Grégoire Burel, Mohammadali Tavakoli, Harith Alani
Journal: Ai Magazine, vol. 45, no. 2, pp. 227-245. Published 2024-06-04 (journal article).
Article page: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12180
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12180
Citations: 0

Abstract

Correcting misinformation is a complex task, influenced by various psychological, social, and technical factors. Most research evaluation methods for identifying effective correction approaches tend to rely on either crowdsourcing, questionnaires, lab-based simulations, or hypothetical scenarios. However, the translation of these methods and findings into real-world settings, where individuals willingly and freely disseminate misinformation, remains largely unexplored. Consequently, we lack a comprehensive understanding of how individuals who share misinformation in natural online environments would respond to corrective interventions. In this study, we explore the effectiveness of corrective messaging on 3898 users who shared misinformation on Twitter/X over 2 years. We designed and deployed a bot to automatically identify individuals who share misinformation and subsequently alert them to related fact-checks in various message formats. Our analysis shows that only a small minority of users react positively to the corrective messages, with most users either ignoring them or reacting negatively. Nevertheless, we also found that more active users were proportionally more likely to react positively to corrections and we observed that different message tones made particular user groups more likely to react to the bot.
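The core pipeline the abstract describes — detect a shared piece of misinformation, match it to a published fact-check, and reply in one of several message tones — can be sketched in a few lines. This is an illustrative sketch only: the fact-check index, the tone templates, and the function names below are hypothetical, not the authors' actual implementation.

```python
from typing import Optional

# Hypothetical fact-check index: maps URLs that circulate as misinformation
# to a published fact-check. The paper's real matching pipeline is more
# involved; this stands in for it.
FACT_CHECKS = {
    "example.com/fake-cure": "https://factcheck.example/cure-debunked",
}

# Illustrative message templates for the different tones the study varied.
TONES = {
    "neutral": "This claim has been fact-checked: {url}",
    "polite": "Hi! You may not have seen it, but this claim has been fact-checked: {url}",
}


def compose_correction(shared_url: str, tone: str = "neutral") -> Optional[str]:
    """Return a corrective reply for a shared URL, or None if no fact-check matches."""
    fact_check = FACT_CHECKS.get(shared_url)
    if fact_check is None:
        return None
    return TONES[tone].format(url=fact_check)
```

In a deployed bot, the returned string would be posted as a reply via the platform API; varying the `tone` argument corresponds to the different message formats whose effects the study compares across user groups.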


Source journal: Ai Magazine (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 3.90
Self-citation rate: 11.10%
Articles per year: 61
Review time: >12 weeks
Journal description: AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes the contribution of articles on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, conference, symposium, or workshop reports, and timely columns on topics of interest to AI scientists.
Latest articles in this journal:
- Issue Information
- AI fairness in practice: Paradigm, challenges, and prospects
- Toward the confident deployment of real-world reinforcement learning agents
- Towards robust visual understanding: A paradigm shift in computer vision from recognition to reasoning
- Efficient and robust sequential decision making algorithms