This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias

Alisa Rieger, Tim Draws, M. Theune, N. Tintarev
{"title":"This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias","authors":"Alisa Rieger, Tim Draws, M. Theune, N. Tintarev","doi":"10.1145/3465336.3475101","DOIUrl":null,"url":null,"abstract":"During online information search, users tend to select search results that confirm previous beliefs and ignore competing possibilities. This systematic pattern in human behavior is known as confirmation bias. In this paper, we study the effect of obfuscation (i.e., hiding the result unless the user clicks on it) with warning labels and the effect of task on interaction with attitude-confirming search results. We conducted a preregistered, between-subjects crowdsourced user study (N=328) comparing six groups: three levels of obfuscation (targeted, random, none) and two levels of task (joint, two separate) for four debated topics. We found that both types of obfuscation influence user interactions, and in particular that targeted obfuscation helps decrease interaction with attitude-confirming search results. Future work is needed to understand how much of the observed effect is due to the strong influence of obfuscation, versus the warning label or the task design. We discuss design guidelines concerning system goals such as decreasing consumption of attitude-confirming search results, versus nudging users toward a more analytical mode of information processing. We also discuss implications for future work, such as the effects of interventions for confirmation bias mitigation over repeated exposure. 
We conclude with a strong word of caution: measures such as obfuscations should only be used for the benefit of the user, e.g., when they explicitly consent to mitigating their own biases.","PeriodicalId":325072,"journal":{"name":"Proceedings of the 32nd ACM Conference on Hypertext and Social Media","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 32nd ACM Conference on Hypertext and Social Media","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3465336.3475101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

During online information search, users tend to select search results that confirm previous beliefs and ignore competing possibilities. This systematic pattern in human behavior is known as confirmation bias. In this paper, we study the effect of obfuscation (i.e., hiding the result unless the user clicks on it) with warning labels and the effect of task on interaction with attitude-confirming search results. We conducted a preregistered, between-subjects crowdsourced user study (N=328) comparing six groups: three levels of obfuscation (targeted, random, none) and two levels of task (joint, two separate) for four debated topics. We found that both types of obfuscation influence user interactions, and in particular that targeted obfuscation helps decrease interaction with attitude-confirming search results. Future work is needed to understand how much of the observed effect is due to the strong influence of obfuscation, versus the warning label or the task design. We discuss design guidelines concerning system goals such as decreasing consumption of attitude-confirming search results, versus nudging users toward a more analytical mode of information processing. We also discuss implications for future work, such as the effects of interventions for confirmation bias mitigation over repeated exposure. We conclude with a strong word of caution: measures such as obfuscations should only be used for the benefit of the user, e.g., when they explicitly consent to mitigating their own biases.
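The three obfuscation conditions compared in the study (targeted, random, none) can be sketched in code. The sketch below is illustrative only and assumes a simplified data model: each search result carries a `stance` label, and "obfuscated" results would be hidden behind a warning label until clicked. The function and field names are not taken from the paper's actual experimental materials.

```python
import random

def obfuscate(results, user_stance, mode, seed=0):
    """Flag search results for obfuscation (hidden behind a warning
    label until clicked) under one of three study conditions.

    results:     list of dicts with a 'stance' key (e.g. 'pro'/'con')
    user_stance: the user's prior attitude on the debated topic
    mode:        'targeted', 'random', or 'none'
    """
    rng = random.Random(seed)
    # Attitude-confirming results share the user's stance.
    confirming = [r for r in results if r["stance"] == user_stance]
    if mode == "targeted":
        # Hide exactly the attitude-confirming results.
        hidden = set(map(id, confirming))
    elif mode == "random":
        # Hide the same *number* of results, chosen at random, so that
        # only the targeting differs between the two conditions.
        hidden = set(map(id, rng.sample(results, k=len(confirming))))
    else:  # 'none' (control): everything stays visible
        hidden = set()
    return [dict(r, obfuscated=(id(r) in hidden)) for r in results]
```

Keeping the number of hidden results equal across the targeted and random conditions is what lets the comparison isolate the effect of *which* results are hidden, rather than *how many*.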