AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages

IF 5.2 | CAS Tier 1 (Literature) | Q1 COMMUNICATION | Digital Journalism | Pub Date: 2023-09-14 | DOI: 10.1080/21670811.2023.2254820
Myojung Chung, Won-Ki Moon, S. Mo Jones-Jang
{"title":"人工智能作为一个非政治裁判:使用替代来源来减少事实核查信息处理中的党派偏见","authors":"Myojung Chung, Won-Ki Moon, S. Mo Jones-Jang","doi":"10.1080/21670811.2023.2254820","DOIUrl":null,"url":null,"abstract":"AbstractWhile fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, if not backfires, when the fact-checking messages contradict individual audiences’ political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans’ processing of fact-checking messages. Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.Keywords: AIartificial intelligencefact-checkingmisinformationmessage credibilityfake newsmotivated reasoningsocial media Disclosure StatementNo potential conflict of interest was reported by the author(s).Notes1 A series of analysis of variance (ANOVA) and Chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology, p = .493 for frequency of social media use). Thus, randomization was deemed successful.2 To further explore differences in message credibility across the four fact-checking source labels, one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed that there are significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042). However, no significant differences were found among other source labels.","PeriodicalId":11166,"journal":{"name":"Digital Journalism","volume":"219 1","pages":"0"},"PeriodicalIF":5.2000,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages\",\"authors\":\"Myojung Chung, Won-Ki Moon, S. Mo Jones-Jang\",\"doi\":\"10.1080/21670811.2023.2254820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AbstractWhile fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, if not backfires, when the fact-checking messages contradict individual audiences’ political stance. 
To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans’ processing of fact-checking messages. Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.Keywords: AIartificial intelligencefact-checkingmisinformationmessage credibilityfake newsmotivated reasoningsocial media Disclosure StatementNo potential conflict of interest was reported by the author(s).Notes1 A series of analysis of variance (ANOVA) and Chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology, p = .493 for frequency of social media use). Thus, randomization was deemed successful.2 To further explore differences in message credibility across the four fact-checking source labels, one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed that there are significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042). However, no significant differences were found among other source labels.\",\"PeriodicalId\":11166,\"journal\":{\"name\":\"Digital Journalism\",\"volume\":\"219 1\",\"pages\":\"0\"},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2023-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Journalism\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/21670811.2023.2254820\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Journalism","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21670811.2023.2254820","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 0

Abstract

While fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans' biased information processing. The efficacy of fact-checking often decreases, and can even backfire, when fact-checking messages contradict an individual audience's political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels on fact-checking messages (human experts vs. AI vs. crowdsourcing vs. a human experts-AI hybrid) influence partisans' processing of those messages. Results showed that the AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages, whereas partisan bias remained evident for the human experts and human experts-AI hybrid source labels.

Keywords: AI; artificial intelligence; fact-checking; misinformation; message credibility; fake news; motivated reasoning; social media

Disclosure Statement: No potential conflict of interest was reported by the author(s).

Notes

1. A series of analysis of variance (ANOVA) and chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology; p = .493 for frequency of social media use). Randomization was therefore deemed successful.

2. To further explore differences in message credibility across the four fact-checking source labels, a one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen's d = 0.23. Participants in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI hybrid condition (M = 3.84, SD = 0.81); the crowdsourcing condition showed the lowest (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042); no significant differences were found among the other source labels.
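Note 2 describes a standard two-step procedure: an omnibus one-way ANOVA followed by Bonferroni-corrected pairwise comparisons. The authors' analysis code is not published, so the Python sketch below is purely illustrative: the per-group ratings are simulated to match the means and standard deviations reported in Note 2, and the group sizes are assumptions (the paper reports only the overall N = 645).

from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical message-credibility ratings for the four source-label
# conditions, simulated from the means/SDs reported in Note 2.
# Group sizes are assumed; the paper reports only the total N = 645.
conditions = {
    "AI": rng.normal(3.89, 0.79, 161),
    "human experts": rng.normal(3.86, 0.89, 161),
    "experts-AI hybrid": rng.normal(3.84, 0.81, 161),
    "crowdsourcing": rng.normal(3.66, 0.81, 162),
}

# Step 1: omnibus one-way ANOVA across the four conditions.
f_stat, p_omnibus = stats.f_oneway(*conditions.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.3f}")

# Step 2: Bonferroni post hoc test -- each pairwise t-test is judged
# against alpha divided by the number of comparisons (6 pairs of 4 groups).
pairs = list(combinations(conditions, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    t_stat, p = stats.ttest_ind(conditions[a], conditions[b])
    verdict = "significant" if p < alpha else "n.s."
    print(f"{a} vs. {b}: t = {t_stat:.2f}, p = {p:.4f} ({verdict})")

The randomization check in Note 1 follows the same pattern, with stats.chi2_contingency in place of the t-tests for the categorical demographic variables.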
Source Journal
Digital Journalism
CiteScore: 11.20
Self-citation rate: 24.10%
Articles published: 103
About the Journal

Digital Journalism provides a critical forum for scholarly discussion, analysis and responses to the wide-ranging implications of digital technologies, along with economic, political and cultural developments, for the practice and study of journalism. Radical shifts in journalism are changing every aspect of the production, content and reception of news, and at a dramatic pace which has transformed ‘new media’ into ‘legacy media’ in barely a decade. These crucial changes challenge traditional assumptions in journalism practice, scholarship and education, make definitional boundaries fluid and require reassessment of even the most fundamental questions, such as "What is journalism?" and "Who is a journalist?" Digital Journalism pursues a significant and exciting editorial agenda including:
- Digital media and the future of journalism
- Social media as sources and drivers of news
- The changing ‘places’ and ‘spaces’ of news production and consumption in the context of digital media
- News on the move and mobile telephony
- The personalisation of news
- Business models for funding digital journalism in the digital economy
- Developments in data journalism and data visualisation
- New research methods to analyse and explore digital journalism
- Hyperlocalism and new understandings of community journalism
- Changing relationships between journalists, sources and audiences
- Citizen and participatory journalism
- Machine written news and the automation of journalism
- The history and evolution of online journalism
- Changing journalism ethics in a digital setting
- New challenges and directions for journalism education and training
- Digital journalism, protest and democracy
- Journalists’ changing role perceptions
- Wikileaks and novel forms of investigative journalism
Latest Articles in This Journal
- Why Infrastructure Studies for Journalism?
- People, Power, Platforms and the Business of Journalism
- The Impact of Journalistic Cultures on Social Media Discourse: US Primary Debates in Cross-Lingual Online Spaces
- Journalism as a Service: How Tablet News Service Influences Subscriber Retention and Long-Term Profitability
- The Harming and the Helping: Perceived Organizational Effects on Mental Health in the Newsroom