Human+AI Crowd Task Assignment Considering Result Quality Requirements

Masaki Kobayashi, Kei Wakabayashi, Atsuyuki Morishima
DOI: 10.1609/hcomp.v9i1.18943
Journal: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
Published: 2021-10-04
Citations: 5

Abstract

This paper addresses the problem of dynamically assigning tasks to a crowd consisting of AI and human workers. Crowdsourcing the creation of AI programs is now common practice. To apply such AI programs to a set of tasks, we often take an "all-or-nothing" approach that waits until the AI is good enough. However, this approach prevents us from exploiting the AI's answers until the process is complete, and it also prevents the exploration of different AI candidates. Integrating a created AI with other AIs and with human computation to obtain a more efficient human-AI team is therefore not trivial. In this paper, we propose a method that addresses these issues by adopting a "divide-and-conquer" strategy for AI worker evaluation. Here, an assignment is optimal when the number of tasks assigned to humans is minimal, provided the final results satisfy a given quality requirement. This paper presents theoretical analyses of the proposed method and an extensive set of experiments conducted on open benchmarks and real-world datasets. The results show that the algorithm assigns many more tasks to AI than the baselines when it is difficult for the AIs to satisfy the quality requirement over the whole task set. They also show that it can flexibly change the number of tasks assigned to multiple AI workers in accordance with the performance of the available AI workers.
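The divide-and-conquer idea sketched in the abstract can be illustrated with a small toy implementation. This is not the paper's algorithm; it is a hedged approximation under assumed details: AI quality on a task group is estimated from a small human-labeled random sample, a group is handed to the AI only when that estimate meets the quality requirement, and otherwise the group is halved and re-evaluated, with humans as the fallback. All names (`assign_tasks`, `ai_predict`, `human_label`) and parameters are hypothetical.

```python
import random

def assign_tasks(tasks, ai_predict, human_label,
                 quality_req=0.9, sample_size=5, min_group=10):
    """Divide-and-conquer sketch: give a task group to the AI only when a
    human-checked random sample of that group meets the quality requirement;
    otherwise split the group in half and retry, with humans as fallback.
    Assumes min_group > sample_size so sampling is always possible."""
    results = {}

    def solve(group):
        if len(group) < min_group:          # too small to estimate reliably:
            for t in group:                 # let humans do the whole group
                results[t] = human_label(t)
            return
        sample = random.sample(group, sample_size)
        hits = sum(ai_predict(t) == human_label(t) for t in sample)
        if hits / sample_size >= quality_req:
            for t in group:                 # estimated accuracy is sufficient:
                results[t] = ai_predict(t)  # assign the whole group to the AI
        else:                               # AI not good enough on this group:
            mid = len(group) // 2           # divide and evaluate each half
            solve(group[:mid])
            solve(group[mid:])

    solve(list(tasks))
    return results
```

In this sketch, a strong AI receives almost all tasks after a single cheap sample, while a weak AI causes the recursion to bottom out and route everything to humans; the sample labels already paid for by humans are discarded in the AI branch, a simplification a real system would avoid.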