Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through Trans-IMSM Model

Vasanthi Durairaj, Palani Uthirapathy
DOI: 10.1007/s10278-024-01222-7
Journal: Journal of Imaging Informatics in Medicine
Published: 2024-08-15 (Journal Article)
Citations: 0

Abstract

Multi-modal medical image (MI) fusion helps generate composite images that collect complementary features from distinct images of several modalities, helping physicians diagnose disease accurately. Hence, this research proposes a novel multi-modal MI fusion model, the guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach, to produce high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. The research uses a CT and MRI brain scan dataset to gather the input CT and MRI images. First, the input images are preprocessed to improve image quality and generalization ability for further analysis. Then, the preprocessed CT and MRI images are decomposed into detail and base components using a guided filter-based MI decomposition approach. This approach involves two phases: acquiring the image guidance and decomposing the images with the guided filter. A Canny operator extracts the image guidance, comprising robust edges, from the CT and MRI images, and the guided filter decomposes the guidance and preprocessed images. The Trans-IMSM model then fuses the detail components, while a weighting approach fuses the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network to generate the final fused images for brain tumor detection. Extensive tests evaluate the Trans-IMSM method's efficacy. The evaluation results demonstrate its robustness and effectiveness, achieving an accuracy of 98.64% and an SSIM of 0.94.
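The decomposition step described above can be sketched in a few lines. The following is a minimal, illustrative implementation of a standard guided filter splitting an image into base and detail layers, plus an equal-weight fusion of the base layers. The abstract does not specify the filter radius, regularization epsilon, or the base-component weighting rule, so those values (and the use of the image as its own guide here, rather than a Canny edge map) are placeholder assumptions, not the authors' implementation.

```python
# Sketch of guided filter-based base/detail decomposition (He et al.-style
# guided filter) with a simple weighted base fusion. All parameter values
# are illustrative assumptions; the paper's settings are not given.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing of `src`, steered by `guide`."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g ** 2
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)   # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(image, guide):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = guided_filter(guide, image)
    detail = image - base        # by construction, base + detail == image
    return base, detail

# Toy CT/MRI stand-ins; in the paper the guidance comes from Canny edge maps.
rng = np.random.default_rng(0)
ct = rng.random((64, 64))
mri = rng.random((64, 64))
ct_base, ct_detail = decompose(ct, ct)
mri_base, mri_detail = decompose(mri, mri)

# Equal-weight fusion of the base layers; the abstract only says
# "a weighting approach", so 0.5/0.5 is a placeholder choice.
fused_base = 0.5 * ct_base + 0.5 * mri_base
```

In the full pipeline, the detail components would instead be fused by the Trans-IMSM transformer and the result passed through the gated fusion and reconstruction network.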
