Optimization Strategies for Interactive Classification of Interstitial Lung Disease Textures

Frontiers in ICT (Q1, Computer Science) · Pub Date: 2016-12-27 · DOI: 10.3389/fict.2016.00033
Thessa T. J. P. Kockelkorn, Rui Ramos, José Ramos, P. A. Jong, C. Schaefer-Prokop, R. Wittenberg, A. Tiehuis, J. Grutters, M. Viergever, B. Ginneken
Citations: 0

Abstract

For computerized analysis of textures in interstitial lung disease, manual annotations of lung tissue are necessary. Since making these annotations is labor-intensive, we previously proposed an interactive annotation framework. In this framework, observers iteratively trained a classifier to distinguish the different texture types by correcting its classification errors. In this work, we investigated three ways to extend this approach, in order to decrease the amount of user interaction required to annotate all lung tissue in a CT scan. First, we conducted automatic classification experiments to test how data from previously annotated scans can be used for classification of the scan under consideration. We compared the performance of a classifier trained on data from one observer, a classifier trained on data from multiple observers, a classifier trained on consensus training data, and an ensemble of classifiers, each trained on data from different sources. Experiments were conducted without and with texture selection. In the former case, training data from all 8 textures was used. In the latter, only training data from the texture types present in the scan were used, and the observer would have to indicate textures contained in the scan to be analyzed. Second, we simulated interactive annotation to test the effects of (1) asking observers to perform texture selection before the start of annotation, (2) the use of a classifier trained on data from previously annotated scans at the start of annotation, when the interactive classifier is untrained, and (3) allowing observers to choose which interactive or automatic classification results they wanted to correct. Finally, various strategies for selecting the classification results that were presented to the observer were considered. Classification accuracies for all possible interactive annotation scenarios were compared. 
Using the best performing protocol, in which observers select the textures that should be distinguished in the scan and in which they can choose which classification results to use for correction, a median accuracy of 88% was reached. The results obtained using this protocol were significantly better than results obtained with other interactive or automatic classification protocols.
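The correction loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a simple nearest-centroid rule stands in for the texture classifier, an `oracle` function stands in for the human observer, and all names are invented for this sketch. Each round, the classifier labels every sample, the "observer" corrects the errors, and the classifier is retrained on the accumulated annotations.

```python
class NearestCentroid:
    """Toy stand-in for the paper's texture classifier: one centroid per label."""

    def __init__(self):
        self.centroids = {}

    def fit(self, samples, labels):
        sums = {}
        for x, y in zip(samples, labels):
            acc, n = sums.get(y, ([0.0] * len(x), 0))
            sums[y] = ([a + b for a, b in zip(acc, x)], n + 1)
        self.centroids = {y: [a / n for a in acc] for y, (acc, n) in sums.items()}

    def predict(self, x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
        return min(self.centroids, key=lambda y: dist(self.centroids[y]))


def interactive_annotation(samples, oracle, n_rounds=3, seed_per_class=1):
    """Simulate the interactive loop: seed with a few observer labels, then
    let the observer (here: the oracle) correct errors and retrain each round."""
    labelled = {}
    # Seed: the observer annotates a few examples of each texture type present.
    for i in range(len(samples)):
        y = oracle(i)
        if sum(1 for v in labelled.values() if v == y) < seed_per_class:
            labelled[i] = y
    clf = NearestCentroid()
    for _ in range(n_rounds):
        clf.fit([samples[i] for i in labelled], [labelled[i] for i in labelled])
        # The observer reviews the classification and corrects the errors;
        # corrections become training data for the next round.
        for i, x in enumerate(samples):
            if clf.predict(x) != oracle(i):
                labelled[i] = oracle(i)
    clf.fit([samples[i] for i in labelled], [labelled[i] for i in labelled])
    return clf
```

In this toy form the observer corrects every error; the protocols compared in the paper differ precisely in which results are shown for correction, whether texture selection restricts the label set, and whether a classifier pretrained on previously annotated scans supplies the initial labels.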