Do you see what I see: crowdsource annotation of captured scenes
J. Hipp, Deepti Adlakha, R. Gernes, A. Kargol, Robert Pless
{"title":"你看到我看到的了吗:拍摄场景的众包注释","authors":"J. Hipp, Deepti Adlakha, R. Gernes, A. Kargol, Robert Pless","doi":"10.1145/2526667.2526671","DOIUrl":null,"url":null,"abstract":"The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras and images are of street intersections, a subset of which has experienced built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each before a built environment change (2007) and after (2010) to the crowdsourcing website Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles. Two trained Research Assistants completed the same tasks. Reliability and validity statistics of MTurk workers revealed substantial agreement in annotating captured images of pedestrians and vehicles. Using the mean annotation of four MTurk workers proved most parsimonious for valid results. Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"51 8","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Do you see what I see: crowdsource annotation of captured scenes\",\"authors\":\"J. Hipp, Deepti Adlakha, R. Gernes, A. Kargol, Robert Pless\",\"doi\":\"10.1145/2526667.2526671\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras and images are of street intersections, a subset of which has experienced built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each before a built environment change (2007) and after (2010) to the crowdsourcing website Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles. Two trained Research Assistants completed the same tasks. Reliability and validity statistics of MTurk workers revealed substantial agreement in annotating captured images of pedestrians and vehicles. Using the mean annotation of four MTurk workers proved most parsimonious for valid results. 
Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.\",\"PeriodicalId\":124821,\"journal\":{\"name\":\"International SenseCam & Pervasive Imaging Conference\",\"volume\":\"51 8\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International SenseCam & Pervasive Imaging Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2526667.2526671\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International SenseCam & Pervasive Imaging Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2526667.2526671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras and images are of street intersections, a subset of which has experienced built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each before a built environment change (2007) and after (2010) to the crowdsourcing website Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles. Two trained Research Assistants completed the same tasks. Reliability and validity statistics of MTurk workers revealed substantial agreement in annotating captured images of pedestrians and vehicles. Using the mean annotation of four MTurk workers proved most parsimonious for valid results. Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.
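To make the aggregation step concrete, the sketch below shows one plausible way to combine five crowdsourced counts per image into a four-worker mean and check its validity against the trained Research Assistants' counts. The specifics are assumptions, not the paper's method: Pearson correlation stands in for the unspecified validity statistic, the rule of dropping the worker farthest from the median is hypothetical, and the counts are toy data.

```python
# Minimal sketch (assumptions noted above): aggregate crowdsourced counts
# and compare them with trained-annotator counts.
import numpy as np
from scipy.stats import pearsonr

# Toy data: pedestrian counts for 6 images.
# Rows = images, columns = the five MTurk workers who annotated each image.
mturk_counts = np.array([
    [3, 4, 3, 5, 3],
    [0, 0, 1, 0, 0],
    [7, 6, 8, 7, 12],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 6, 5],
    [1, 1, 0, 1, 1],
])

# Reference counts: mean of the two trained Research Assistants per image.
ra_counts = np.array([3.5, 0.0, 7.0, 2.0, 5.0, 1.0])

def four_worker_mean(row):
    """Drop the single count farthest from the row median, average the
    remaining four (a hypothetical trimming rule, not from the paper)."""
    deviations = np.abs(row - np.median(row))
    keep = np.delete(row, np.argmax(deviations))
    return keep.mean()

crowd_estimate = np.array([four_worker_mean(row) for row in mturk_counts])

# Validity here is illustrated with a Pearson correlation between the
# crowd estimates and the Research Assistants' counts.
r, p = pearsonr(crowd_estimate, ra_counts)
print(f"Crowd estimates: {crowd_estimate}")
print(f"Pearson r vs. trained annotators: {r:.3f} (p = {p:.3f})")
```

Any outlier-trimming rule, correlation threshold, or reliability measure (e.g., an intraclass correlation instead of Pearson's r) would need to follow the paper's actual analysis, which the abstract does not specify.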