Pushing the limits of mechanical turk: qualifying the crowd for video geo-location

L. Gottlieb, Jaeyoung Choi, P. Kelm, T. Sikora, G. Friedland
DOI: 10.1145/2390803.2390815
Published in: CrowdMM '12 (October 29, 2012)
Citations: 16

Abstract

In this article we review the methods we have developed for finding Mechanical Turk participants for the manual annotation of the geo-location of random videos from the web. We require high-quality annotations for this project, as we are attempting to establish a human baseline for future comparison to machine systems. This task differs from a standard Mechanical Turk task in that it is difficult for both humans and machines, whereas a standard Mechanical Turk task is usually easy for humans and difficult or impossible for machines. This article discusses the varied difficulties we encountered while qualifying annotators and the steps that we took to select the individuals most likely to do well at our annotation task in the future.
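To give a concrete sense of what "qualifying the crowd" involves on Mechanical Turk, a screening step like the one the abstract describes would typically be implemented as a qualification test: a QuestionForm XML document that workers must answer before they may accept the task. The sketch below builds such a form; the question wording, identifiers, and answer options are illustrative assumptions, not the authors' actual test materials.

```python
# Hypothetical sketch of an MTurk qualification test in the spirit of the
# paper's annotator-screening step. Question IDs, wording, and options are
# invented for illustration; they are not the authors' materials.
import xml.etree.ElementTree as ET

# Namespace required by MTurk's QuestionForm schema.
MTURK_NS = ("http://mechanicalturk.amazonaws.com/"
            "AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd")

def build_question_form(questions):
    """Build a QuestionForm XML string from (id, text, options) triples."""
    form = ET.Element("QuestionForm", xmlns=MTURK_NS)
    for qid, text, options in questions:
        q = ET.SubElement(form, "Question")
        ET.SubElement(q, "QuestionIdentifier").text = qid
        ET.SubElement(q, "IsRequired").text = "true"
        content = ET.SubElement(q, "QuestionContent")
        ET.SubElement(content, "Text").text = text
        answer = ET.SubElement(q, "AnswerSpecification")
        sel = ET.SubElement(answer, "SelectionAnswer")
        selections = ET.SubElement(sel, "Selections")
        for opt in options:
            s = ET.SubElement(selections, "Selection")
            ET.SubElement(s, "SelectionIdentifier").text = opt
            ET.SubElement(s, "Text").text = opt
    return ET.tostring(form, encoding="unicode")

# Example screening question: geo-locate a still frame from a video.
qualification_xml = build_question_form([
    ("q1", "Which city is most likely shown in the linked video frame?",
     ["Berlin", "Istanbul", "Buenos Aires", "Cannot tell"]),
])
```

The resulting XML could then be registered with the MTurk API, for example via boto3's `create_qualification_type` call (passing it as the `Test` parameter alongside an `AnswerKey` and `TestDurationInSeconds`), so that only workers who pass the screening see the annotation task.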