Ground truth generation in medical imaging: a crowdsourcing-based iterative approach

A. Foncubierta-Rodríguez, H. Müller
DOI: 10.1145/2390803.2390808
Published in: CrowdMM '12 (2012-10-29)
Citations: 72

Abstract

As in many other scientific domains where computer-based tools need to be evaluated, medical imaging often requires the expensive generation of manual ground truth. For some tasks, medical doctors are required to guarantee high-quality, valid results, whereas other tasks, such as the image modality classification described in this text, can be performed at sufficiently high quality by domain experts without medical training. Crowdsourcing has recently received much attention in many domains, as workers perform so-called human intelligence tasks for often small amounts of money, reducing the cost of creating manually annotated data sets and ground truth for evaluation tasks. On the other hand, the quality obtained when using unknown workers has often been questioned. Controlling task quality has remained one of the main challenges of crowdsourcing approaches, as the persons performing the tasks may be interested less in result quality than in their payment. However, several crowdsourcing platforms, such as the Crowdflower platform we used, allow creating task interfaces and sharing them with only a limited number of known persons. This text describes the interfaces developed and the annotation quality obtained from several domain experts and one medical doctor. In particular, the feedback loop of the semi-automatic tools is explained: the results of an initial crowdsourcing round, in which medical images were classified into a set of image categories, were manually checked by domain experts and then used to train an automatic system that classified these images visually. The automatic classification results were then presented to annotators, who only had to confirm or refuse the proposed classes, reducing the time required compared to the initial tasks. Crowdsourcing platforms allow creating a large variety of judgement interfaces. Whether used among known experts or with paid unknown workers, they increase the speed of ground truth creation and limit the amount of money to be paid.
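The iterative loop described above — crowdsourced labels are expert-verified, used to train a classifier, and the classifier's proposals are then merely confirmed or refused by annotators — can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: the categories, the "feature" field, and the memorising "classifier" are all invented stand-ins for the real crowdsourcing tasks and visual classifier.

```python
import random

random.seed(0)

# Hypothetical modality categories and toy images whose single "feature"
# happens to determine the true class.
CATEGORIES = ["X-ray", "MRI", "CT", "Ultrasound"]

def crowdsource_label(image):
    """Stand-in for a human intelligence task: mostly right, sometimes noisy."""
    return image["true_class"] if random.random() < 0.8 else random.choice(CATEGORIES)

def expert_verify(image, label):
    """Domain expert checks and corrects a crowdsourced label (assumed reliable)."""
    return image["true_class"]

def train_classifier(labelled):
    """Toy 'training': memorise the majority label seen for each feature value."""
    counts = {}
    for img, label in labelled:
        per_feature = counts.setdefault(img["feature"], {})
        per_feature[label] = per_feature.get(label, 0) + 1
    return {feat: max(c, key=c.get) for feat, c in counts.items()}

def confirm_or_refuse(image, proposed):
    """Annotator only checks the automatic proposal: a cheap binary task."""
    return proposed == image["true_class"]

# Round 1: crowdsourcing plus expert control on a small seed set.
seed_images = [{"feature": c, "true_class": c} for c in CATEGORIES]
ground_truth = [(img, expert_verify(img, crowdsource_label(img)))
                for img in seed_images]

# Round 2: the trained model proposes labels; humans only confirm or refuse.
model = train_classifier(ground_truth)
new_images = [{"feature": c, "true_class": c} for c in CATEGORIES * 2]
confirmed = [img for img in new_images
             if confirm_or_refuse(img, model[img["feature"]])]
print(f"{len(confirmed)}/{len(new_images)} proposals confirmed")
```

The point of the loop is that the second round replaces full labelling with a much cheaper confirm/refuse judgement, which is where the abstract's reported time savings come from.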