The Effect of Performance-Based Compensation on Crowdsourced Human-Robot Interaction Experiments

Zahra Rezaei Khavas, Monish Reddy Kotturu, Russell Purkins, S. Ahmadzadeh, P. Robinette
DOI: 10.17706/jsw.18.3.117-129
Journal: e Informatica Softw. Eng. J., vol. 33, no. 1, pp. 117-129
Published: 2023-08-01 (Journal Article)

Abstract

Social scientists have long been interested in the relationship between financial incentives and performance. This subject has gained new relevance with the advent of web-based "crowdsourcing" models of production. In recent decades, recruiting participants from crowdsourcing platforms has become popular among human-robot trust researchers. A persistent concern in human-robot interaction research, especially in crowdsourced experiments, is the large number of outliers caused by participants' insufficient attention and focus, or by their boredom and low engagement. Financial incentives offer one possible solution. In this study, we examine the effects of performance-based compensation on data quality, on participants' performance and accuracy in the assigned task, and on the results of a human-robot trust experiment. We designed an online human-robot collaborative search task and recruited 120 participants from Amazon Mechanical Turk (AMT). We tested participants' attention, performance, and trust in a robotic teammate under two conditions: constant payment and performance-based payment. We found that financial incentives can increase data quality and help prevent random, aimless behavior by participants. We also found that financial incentives significantly improve participants' performance and lead them to put more effort into the assigned task. However, compensation does not affect the experiment's results unless a measure is directly tied to the compensation value.