Learning to Localize Cross-Anatomy Landmarks in X-Ray Images with a Universal Model.

BME frontiers · IF 5.0 · Q1 (Engineering, Biomedical) · Pub Date: 2022-06-08 · eCollection Date: 2022-01-01 · DOI: 10.34133/2022/9765095
Heqin Zhu, Qingsong Yao, Li Xiao, S Kevin Zhou
{"title":"Learning to Localize Cross-Anatomy Landmarks in X-Ray Images with a Universal Model.","authors":"Heqin Zhu, Qingsong Yao, Li Xiao, S Kevin Zhou","doi":"10.34133/2022/9765095","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. In this work, we develop a universal anatomical landmark detection model which learns once from multiple datasets corresponding to different anatomical regions. Compared with the conventional model trained on a single dataset, this universal model not only is more light weighted and easier to train but also improves the accuracy of the anatomical landmark location. <i>Introduction</i>. The accurate and automatic localization of anatomical landmarks plays an essential role in medical image analysis. However, recent deep learning-based methods only utilize limited data from a single dataset. It is promising and desirable to build a model learned from different regions which harnesses the power of big data. <i>Methods</i>. Our model consists of a local network and a global network, which capture local features and global features, respectively. The local network is a fully convolutional network built up with depth-wise separable convolutions, and the global network uses dilated convolution to enlarge the receptive field to model global dependencies. <i>Results</i>. We evaluate our model on four 2D X-ray image datasets totaling 1710 images and 72 landmarks in four anatomical regions. Extensive experimental results show that our model improves the detection accuracy compared to the state-of-the-art methods. <i>Conclusion</i>. Our model makes the first attempt to train a single network on multiple datasets for landmark detection. Experimental results qualitatively and quantitatively show that our proposed model performs better than other models trained on multiple datasets and even better than models trained on a single dataset separately.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521670/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BME frontiers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34133/2022/9765095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective and Impact Statement. In this work, we develop a universal anatomical landmark detection model that learns once from multiple datasets corresponding to different anatomical regions. Compared with a conventional model trained on a single dataset, this universal model is not only more lightweight and easier to train but also improves the accuracy of anatomical landmark localization. Introduction. The accurate and automatic localization of anatomical landmarks plays an essential role in medical image analysis. However, recent deep learning-based methods utilize only limited data from a single dataset. It is promising and desirable to build a model that learns from different anatomical regions and thereby harnesses the power of big data. Methods. Our model consists of a local network and a global network, which capture local features and global features, respectively. The local network is a fully convolutional network built with depth-wise separable convolutions, and the global network uses dilated convolutions to enlarge the receptive field and model global dependencies. Results. We evaluate our model on four 2D X-ray image datasets totaling 1710 images and 72 landmarks across four anatomical regions. Extensive experimental results show that our model improves detection accuracy compared to state-of-the-art methods. Conclusion. Our model makes the first attempt to train a single network on multiple datasets for landmark detection. Experimental results qualitatively and quantitatively show that our proposed model outperforms other models trained on multiple datasets and even models trained separately on each single dataset.
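
The abstract names two complementary components: a lightweight local network built from depth-wise separable convolutions and a global network that uses dilated convolutions to enlarge the receptive field. Below is a minimal PyTorch sketch of those two building blocks feeding a shared heatmap head. The layer counts, channel widths, fusion by concatenation, and the LocalGlobalLandmarkNet name are illustrative assumptions for exposition, not the authors' published implementation.

    # Illustrative sketch only: local pathway (depth-wise separable convolutions)
    # plus global pathway (dilated convolutions), fused into per-landmark heatmaps.
    # Architecture details here are assumptions, not the paper's exact design.
    import torch
    import torch.nn as nn


    class DepthwiseSeparableConv(nn.Module):
        """Depth-wise conv (one filter per channel) followed by a 1x1 point-wise conv."""

        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))


    class LocalGlobalLandmarkNet(nn.Module):
        """Predicts one heatmap per landmark from a single-channel X-ray image."""

        def __init__(self, num_landmarks, width=32):
            super().__init__()
            # Local pathway: lightweight, fully convolutional, depth-wise separable.
            self.local = nn.Sequential(
                DepthwiseSeparableConv(1, width), nn.ReLU(inplace=True),
                DepthwiseSeparableConv(width, width), nn.ReLU(inplace=True),
            )
            # Global pathway: dilated convolutions enlarge the receptive field
            # without downsampling, modelling long-range dependencies.
            self.global_path = nn.Sequential(
                nn.Conv2d(1, width, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, kernel_size=3, padding=4, dilation=4), nn.ReLU(inplace=True),
            )
            # Fuse the two feature maps and regress per-landmark heatmaps.
            self.head = nn.Conv2d(2 * width, num_landmarks, kernel_size=1)

        def forward(self, x):
            feats = torch.cat([self.local(x), self.global_path(x)], dim=1)
            return self.head(feats)  # (B, num_landmarks, H, W) heatmaps


    if __name__ == "__main__":
        model = LocalGlobalLandmarkNet(num_landmarks=19)
        heatmaps = model(torch.randn(1, 1, 256, 256))
        print(heatmaps.shape)  # torch.Size([1, 19, 256, 256])

At inference time, each landmark coordinate would typically be read off as the argmax of its predicted heatmap; how the universal model shares these blocks across datasets from different anatomical regions is described in the paper itself.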


Source journal: BME frontiers
CiteScore: 7.10
Self-citation rate: 0.00%
Articles published: 0
Review time: 16 weeks