Anatomic attention regions via optimal anatomy modeling and recognition for DL-based image segmentation.

Yadavendra Nln, J K Udupa, D Odhner, T Liu, Y Tong, D A Torigian
Proceedings of SPIE--the International Society for Optical Engineering, Vol. 12930
DOI: 10.1117/12.3006771
Published: 2024-02-01 (Epub 2024-04-02)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11218901/pdf/

Abstract

Organ segmentation is a crucial task in many medical imaging applications. Many deep learning models have been developed for this purpose, but they are slow and require substantial computational resources. To address this problem, attention mechanisms are used to locate important objects of interest within medical images, allowing the model to segment them accurately even in the presence of noise or artifacts. By attending to specific anatomical regions, the model becomes better at segmentation. Medical images carry unique features in the form of anatomical information, which makes them different from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep-learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on unseen test data and analyzed metrics including the false negative quotient and false positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suitable for each anatomic object. This work opens new possibilities for advancements in medical image segmentation and analysis.
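The abstract evaluates attention-region models with a false negative quotient and a false positive quotient. The paper's exact definitions are not reproduced here; the sketch below assumes the common convention from this research group's related work, where false negatives and false positives are normalized by the ground-truth object volume. Function names and the denominator are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def false_negative_quotient(gt, pred):
    """Fraction of ground-truth voxels missed by the prediction: |GT and not Pred| / |GT|."""
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    return np.logical_and(gt, ~pred).sum() / gt.sum()

def false_positive_quotient(gt, pred):
    """Spurious predicted voxels, relative to ground-truth volume: |Pred and not GT| / |GT|."""
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    return np.logical_and(~gt, pred).sum() / gt.sum()
```

With this normalization, both quotients are zero for a perfect segmentation, and a small attention region that excludes part of the object shows up directly as an elevated false negative quotient.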
