A Checklist to Combat Cognitive Biases in Crowdsourcing

Tim Draws, Alisa Rieger, Oana Inel, Ujwal Gadiraju, Nava Tintarev
{"title":"A Checklist to Combat Cognitive Biases in Crowdsourcing","authors":"Tim Draws, Alisa Rieger, O. Inel, U. Gadiraju, N. Tintarev","doi":"10.1609/hcomp.v9i1.18939","DOIUrl":null,"url":null,"abstract":"Recent research has demonstrated that cognitive biases such as the confirmation bias or the anchoring effect can negatively affect the quality of crowdsourced data. In practice, however, such biases go unnoticed unless specifically assessed or controlled for. Task requesters need to ensure that task workflow and design choices do not trigger workers’ cognitive biases. Moreover, to facilitate the reuse of crowdsourced data collections, practitioners can benefit from understanding whether and which cognitive biases may be associated with the data. To this end, we propose a 12-item checklist adapted from business psychology to combat cognitive biases in crowdsourcing. We demonstrate the practical application of this checklist in a case study on viewpoint annotations for search results. Through a retrospective analysis of relevant crowdsourcing research that has been published at HCOMP in 2018, 2019, and 2020, we show that cognitive biases may often affect crowd workers but are typically not considered as potential sources of poor data quality. The checklist we propose is a practical tool that requesters can use to improve their task designs and appropriately describe potential limitations of collected data. It contributes to a body of efforts towards making human-labeled data more reliable and reusable.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"42","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/hcomp.v9i1.18939","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 42

Abstract

Recent research has demonstrated that cognitive biases such as the confirmation bias or the anchoring effect can negatively affect the quality of crowdsourced data. In practice, however, such biases go unnoticed unless specifically assessed or controlled for. Task requesters need to ensure that task workflow and design choices do not trigger workers’ cognitive biases. Moreover, to facilitate the reuse of crowdsourced data collections, practitioners can benefit from understanding whether and which cognitive biases may be associated with the data. To this end, we propose a 12-item checklist adapted from business psychology to combat cognitive biases in crowdsourcing. We demonstrate the practical application of this checklist in a case study on viewpoint annotations for search results. Through a retrospective analysis of relevant crowdsourcing research that has been published at HCOMP in 2018, 2019, and 2020, we show that cognitive biases may often affect crowd workers but are typically not considered as potential sources of poor data quality. The checklist we propose is a practical tool that requesters can use to improve their task designs and appropriately describe potential limitations of collected data. It contributes to a body of efforts towards making human-labeled data more reliable and reusable.