SignUpCrowd: Using Sign-Language as an Input Modality for Microtask Crowdsourcing

Aayush Singh, Sebastian Wehkamp, U. Gadiraju
{"title":"SignUpCrowd:使用手语作为微任务众包的输入方式","authors":"Aayush Singh, Sebastian Wehkamp, U. Gadiraju","doi":"10.1609/hcomp.v10i1.21998","DOIUrl":null,"url":null,"abstract":"Different input modalities have been proposed and employed in technological landscapes like microtask crowdsourcing. However, sign language remains an input modality that has received little attention. Despite the fact that thousands of people around the world primarily use sign language, very little has been done to include them in such technological landscapes. We aim to address this gap and take a step towards the inclusion of deaf and mute people in microtask crowdsourcing. We first identify various microtasks which can be adapted to use sign language as input, while elucidating the challenges it introduces. We built a system called ‘SignUpCrowd’ that can be used to support sign language input for microtask crowdsourcing. We carried out a between-subjects study (N=240) to understand the effectiveness of sign language as an input modality for microtask crowdsourcing in comparison to prevalent textual and click input modalities. We explored this through the lens of visual question answering and sentiment analysis tasks by recruiting workers from the Prolific crowdsourcing platform. Our results indicate that sign language as an input modality in microtask crowdsourcing is comparable to the prevalent standards of using text and click input. Although people with no knowledge of sign language found it difficult to use, this input modality has the potential to broaden participation in crowd work. We highlight evidence suggesting the scope for sign language as a viable input type for microtask crowdsourcing. Our findings pave the way for further research to introduce sign language in real-world applications and create an inclusive technological landscape that more people can benefit from.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"SignUpCrowd: Using Sign-Language as an Input Modality for Microtask Crowdsourcing\",\"authors\":\"Aayush Singh, Sebastian Wehkamp, U. Gadiraju\",\"doi\":\"10.1609/hcomp.v10i1.21998\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Different input modalities have been proposed and employed in technological landscapes like microtask crowdsourcing. However, sign language remains an input modality that has received little attention. Despite the fact that thousands of people around the world primarily use sign language, very little has been done to include them in such technological landscapes. We aim to address this gap and take a step towards the inclusion of deaf and mute people in microtask crowdsourcing. We first identify various microtasks which can be adapted to use sign language as input, while elucidating the challenges it introduces. We built a system called ‘SignUpCrowd’ that can be used to support sign language input for microtask crowdsourcing. We carried out a between-subjects study (N=240) to understand the effectiveness of sign language as an input modality for microtask crowdsourcing in comparison to prevalent textual and click input modalities. We explored this through the lens of visual question answering and sentiment analysis tasks by recruiting workers from the Prolific crowdsourcing platform. 
Our results indicate that sign language as an input modality in microtask crowdsourcing is comparable to the prevalent standards of using text and click input. Although people with no knowledge of sign language found it difficult to use, this input modality has the potential to broaden participation in crowd work. We highlight evidence suggesting the scope for sign language as a viable input type for microtask crowdsourcing. Our findings pave the way for further research to introduce sign language in real-world applications and create an inclusive technological landscape that more people can benefit from.\",\"PeriodicalId\":87339,\"journal\":{\"name\":\"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1609/hcomp.v10i1.21998\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/hcomp.v10i1.21998","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Different input modalities have been proposed and employed in technological landscapes like microtask crowdsourcing. However, sign language remains an input modality that has received little attention. Despite the fact that thousands of people around the world primarily use sign language, very little has been done to include them in such technological landscapes. We aim to address this gap and take a step towards the inclusion of deaf and mute people in microtask crowdsourcing. We first identify various microtasks which can be adapted to use sign language as input, while elucidating the challenges it introduces. We built a system called ‘SignUpCrowd’ that can be used to support sign language input for microtask crowdsourcing. We carried out a between-subjects study (N=240) to understand the effectiveness of sign language as an input modality for microtask crowdsourcing in comparison to prevalent textual and click input modalities. We explored this through the lens of visual question answering and sentiment analysis tasks by recruiting workers from the Prolific crowdsourcing platform. Our results indicate that sign language as an input modality in microtask crowdsourcing is comparable to the prevalent standards of using text and click input. Although people with no knowledge of sign language found it difficult to use, this input modality has the potential to broaden participation in crowd work. We highlight evidence suggesting the scope for sign language as a viable input type for microtask crowdsourcing. Our findings pave the way for further research to introduce sign language in real-world applications and create an inclusive technological landscape that more people can benefit from.
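
The abstract does not describe the system's internals, but as a rough, hypothetical sketch of how a recognized sign might be mapped to a microtask answer, the Python snippet below illustrates one possible pipeline. The gesture vocabulary (SIGN_TO_ANSWER), the Microtask schema, and the recognize_sign callback are illustrative assumptions only, not the authors' actual SignUpCrowd implementation.

# Hypothetical sketch: mapping recognized sign-language gestures to microtask answers.
# Everything here (gesture labels, task schema, stubbed capture/recognition) is assumed
# for illustration and is not taken from the SignUpCrowd paper.
from dataclasses import dataclass
from typing import Callable, Optional

# Assumed gesture vocabulary for a sentiment-analysis microtask.
SIGN_TO_ANSWER = {
    "THUMBS_UP": "positive",
    "THUMBS_DOWN": "negative",
    "FLAT_HAND": "neutral",
}

@dataclass
class Microtask:
    task_id: str
    prompt: str                  # e.g. the review text to be labelled
    answer: Optional[str] = None

def capture_frame() -> bytes:
    """Stub: a real system would grab a frame from the worker's webcam."""
    return b""

def answer_with_sign(task: Microtask,
                     recognize_sign: Callable[[bytes], str]) -> Microtask:
    """Capture one frame, recognize the sign, and record the mapped answer."""
    frame = capture_frame()
    sign = recognize_sign(frame)          # e.g. "THUMBS_UP"
    if sign not in SIGN_TO_ANSWER:
        raise ValueError(f"Unrecognized sign: {sign}")
    task.answer = SIGN_TO_ANSWER[sign]
    return task

if __name__ == "__main__":
    # Dummy recognizer standing in for a trained sign-language classifier.
    demo = answer_with_sign(
        Microtask("t1", "The food was great and the staff friendly."),
        lambda frame: "THUMBS_UP",
    )
    print(demo.answer)                    # -> "positive"

In a real deployment the recognize_sign step would be a trained sign-recognition model running on webcam video, and the mapped answer would be submitted through the crowdsourcing platform's task interface; the stub above only shows where that logic would plug in.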