Accurate Whole-Brain Segmentation for Bimodal PET/MR Images via a Cross-Attention Mechanism

IEEE Transactions on Radiation and Plasma Medical Sciences (IF 3.5, Q1 in Radiology, Nuclear Medicine & Medical Imaging) · Published 2024-06-13 · DOI: 10.1109/TRPMS.2024.3413862
Wenbo Li;Zhenxing Huang;Qiyang Zhang;Na Zhang;Wenjie Zhao;Yaping Wu;Jianmin Yuan;Yang Yang;Yan Zhang;Yongfeng Yang;Hairong Zheng;Dong Liang;Meiyun Wang;Zhanli Hu
Volume 9, Issue 1, pp. 47–56 · Journal Article · https://ieeexplore.ieee.org/document/10556684/
Citations: 0

Abstract

The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most current brain-segmentation methods are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate whole-brain segmentation while incorporating functional and anatomical information. To leverage dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Moreover, several deep-learning methods were employed as baselines for evaluating model performance, with the Dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated the advantages of our approach in whole-brain segmentation, achieving 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, outperforming the comparative methods. In addition, consistency and correlation analyses based on the segmentation results also demonstrated that our approach achieved superior performance. In future work, we will try to apply our method to other multimodal tasks, such as PET/CT data analysis.
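The four reported metrics are standard overlap measures between a predicted and a reference binary mask. As a minimal NumPy sketch of how they are typically computed per label (this is an illustration, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Return (DSC, Jaccard, recall, precision) for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    jac = tp / (tp + fp + fn)              # Jaccard index (IoU)
    recall = tp / (tp + fn)                # sensitivity
    precision = tp / (tp + fp)             # positive predictive value
    return dsc, jac, recall, precision
```

For whole-brain segmentation these would be averaged over all anatomical labels; note that DSC and JAC are monotonically related (DSC = 2·JAC / (1 + JAC)), which is consistent with the paired scores reported above.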
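The abstract does not specify the cross-attention module beyond its role of correlating PET and MR feature maps. A generic single-head cross-attention sketch, in which MR-derived queries attend over PET-derived keys and values (all shapes and names here are hypothetical, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(mr_feat, pet_feat, w_q, w_k, w_v):
    """Fuse PET information into MR positions (one head, no batch dim).

    mr_feat:  (n_mr, d)  flattened MR feature vectors (queries)
    pet_feat: (n_pet, d) flattened PET feature vectors (keys/values)
    """
    q = mr_feat @ w_q                         # (n_mr, d)
    k = pet_feat @ w_k                        # (n_pet, d)
    v = pet_feat @ w_v                        # (n_pet, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # scaled dot-product
    attn = softmax(scores, axis=-1)           # each MR position weights PET positions
    return attn @ v                           # (n_mr, d) PET-informed MR features
```

In a 3-D segmentation network the voxel grid of each modality's feature map would be flattened into such sequences, and the fused output fed back into the decoder.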
Source Journal

IEEE Transactions on Radiation and Plasma Medical Sciences (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 8.00 · Self-citation rate: 18.20% · Articles per year: 109