
Journal of imaging informatics in medicine: Latest Publications

Lesion Classification by Model-Based Feature Extraction: A Differential Affine Invariant Model of Soft Tissue Elasticity in CT Images.
Pub Date : 2024-08-20 DOI: 10.1007/s10278-024-01178-8
Weiguo Cao, Marc J Pomeroy, Zhengrong Liang, Yongfeng Gao, Yongyi Shi, Jiaxing Tan, Fangfang Han, Jing Wang, Jianhua Ma, Hongbin Lu, Almas F Abbasi, Perry J Pickhardt

The elasticity of soft tissue has been widely regarded as a characteristic property for differentiating healthy tissue from lesions, motivating the development of several elasticity imaging modalities, such as ultrasound elastography, magnetic resonance elastography, and optical coherence elastography, that measure tissue elasticity directly. This paper proposes an alternative approach: modeling the elasticity for prior knowledge-based extraction of tissue elastic characteristic features for machine learning (ML) lesion classification using computed tomography (CT). The model describes a dynamic non-rigid (or elastic) soft tissue deformation on a differential manifold to mimic tissue elasticity under wave fluctuation in vivo. Based on the model, a local deformation invariant is formulated from the first- and second-order derivatives of the volumetric CT image of the lesion and used to generate an elastic feature map of the lesion volume. Tissue elastic features are extracted from the feature map and fed to an ML classifier. Two pathologically proven image datasets, colon polyps and lung nodules, were used to test the modeling strategy. The method reached an area under the receiver operating characteristic curve (AUC) of 94.2% for the polyps and 87.4% for the nodules, an average gain of 5 to 20% over several existing state-of-the-art image feature-based lesion classification methods. The gain demonstrates the importance of extracting tissue characteristic features for lesion classification, rather than image features, which can include various image artifacts and may vary across acquisition protocols and imaging modalities.
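The abstract does not reproduce the exact invariant, but the general recipe, combining first- and second-order derivatives of the CT volume into a per-voxel feature map, can be sketched. Below is a minimal NumPy illustration; the derivative ratio used here is a hypothetical stand-in, not the authors' differential affine invariant.

```python
import numpy as np

def elastic_feature_map(volume: np.ndarray) -> np.ndarray:
    """Toy derivative-based feature map over a 3D CT volume (z, y, x)."""
    vol = volume.astype(np.float64)

    # 1st-order derivatives along each axis
    gz, gy, gx = np.gradient(vol)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    # 2nd-order structure: Laplacian as the trace of the Hessian
    laplacian = (np.gradient(gz, axis=0)
                 + np.gradient(gy, axis=1)
                 + np.gradient(gx, axis=2))

    # Hypothetical invariant-style ratio of 2nd- to 1st-order structure;
    # the per-voxel values form the "elastic feature map" of the lesion.
    return laplacian / (grad_mag + 1e-8)
```

Summary statistics of such a map (for example, histogram moments over the lesion volume) would then serve as the tissue elastic features passed to the ML classifier.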

DECNet: Left Atrial Pulmonary Vein Class Imbalance Classification Network.
Pub Date : 2024-08-20 DOI: 10.1007/s10278-024-01221-8
GuoDong Zhang, WenWen Gu, TingYu Liang, YanLin Li, Wei Guo, ZhaoXuan Gong, RongHui Ju

In clinical practice, the anatomical classification of pulmonary veins plays a crucial role in the preoperative assessment of radiofrequency ablation surgery for atrial fibrillation. Accurate classification of pulmonary vein anatomy helps physicians select appropriate mapping electrodes and avoid causing pulmonary arterial hypertension. Because pulmonary vein anatomy is diverse yet its classes differ only subtly, and because the data distribution is imbalanced, deep learning models often extract deep features with weak expressive capability, leading to misjudgments that reduce classification accuracy. To address the imbalanced classification of left atrial pulmonary veins, this paper proposes DECNet, a network integrating multi-scale feature-enhanced attention with dual-feature extraction classifiers. The multi-scale feature-enhanced attention uses multi-scale information to guide the reinforcement of deep features, generating channel weights and spatial weights that enhance their expressive capability. The dual-feature extraction classifier assigns a fixed number of channels to each category and evaluates all categories equally, alleviating the learning bias and overfitting caused by data imbalance. Combining the two strengthens the expression of deep features, achieving accurate classification of left atrial pulmonary vein morphology and providing support for subsequent clinical treatment. Evaluated on a dataset provided by the People's Hospital of Liaoning Province and on the publicly available DermaMNIST dataset, the method achieves average accuracies of 78.81% and 83.44%, respectively, demonstrating its effectiveness.
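As a rough illustration of the per-class channel assignment described above, the PyTorch sketch below gives every category its own fixed slice of feature channels and scores each class from its slice alone; the channel count and pooling choice are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DualFeatureClassifier(nn.Module):
    """Sketch of a classifier head that assigns each class a fixed,
    equally sized group of channels so all classes are scored from
    the same amount of feature capacity."""

    def __init__(self, in_channels: int, num_classes: int, per_class: int = 64):
        super().__init__()
        self.num_classes = num_classes
        self.per_class = per_class
        # Project features into num_classes disjoint channel groups.
        self.proj = nn.Conv2d(in_channels, num_classes * per_class, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.pool(self.proj(x)).flatten(1)            # (B, classes * per_class)
        feats = feats.view(-1, self.num_classes, self.per_class)
        return feats.mean(dim=2)                               # one logit per class
```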

2023 Industry Perceptions Survey on AI Adoption and Return on Investment.
Pub Date : 2024-08-20 DOI: 10.1007/s10278-024-01147-1
Mitchell Goldburgh, Michael LaChance, Julia Komissarchik, Julia Patriarche, Joe Chapa, Oliver Chen, Priya Deshpande, Matthew Geeslin, Nina Kottler, Jennifer Sommer, Marcus Ayers, Vedrana Vujic

This SIIM-sponsored 2023 report highlights an industry view of the barriers to, and successes of, artificial intelligence adoption in diagnostic imaging, life sciences, and contrast agents. In general, our 2023 survey indicates progress in adopting AI across multiple uses, and the forecast for its impact on workflow and clinical outcomes remains optimistic. This report, as in prior years, should be seen as a snapshot of the use of AI in imaging. Compared to our 2021 survey, the 2023 respondents reported wider AI adoption but felt it still lagged its potential. Specifically, adoption has increased as the sources of return on investment (ROI) for AI in radiology have become better understood, as documented by vendor/client use-case studies. Discussions of AI solutions generally centered on workflow triage, visualization, detection, and characterization. Generative AI was also mentioned as a way to improve productivity in reporting. As payor reimbursement remains elusive, ROI discussions have expanded to other factors, including increased hospital procedures and admissions, enhanced radiologist productivity for practices, and improved patient outcomes for integrated health networks. Looking at the longer-term horizon, respondents frequently mentioned that the opportunity for AI to achieve greater adoption, with more complex AI and a more manageable and visible ROI, lies outside the USA. Respondents also focused on barriers to trust in AI and on FDA processes.

Confidence-Aware Severity Assessment of Lung Disease from Chest X-Rays Using Deep Neural Network on a Multi-Reader Dataset.
Pub Date : 2024-08-20 DOI: 10.1007/s10278-024-01151-5
Mohammadreza Zandehshahvar, Marly van Assen, Eun Kim, Yashar Kiarashi, Vikranth Keerthipati, Giovanni Tessarin, Emanuele Muscogiuri, Arthur E Stillman, Peter Filev, Amir H Davarpanah, Eugene A Berkowitz, Stefan Tigges, Scott J Lee, Brianna L Vey, Carlo De Cecco, Ali Adibi

In this study, we present a method based on Monte Carlo Dropout (MCD) as a Bayesian neural network (BNN) approximation for confidence-aware severity classification of lung disease in COVID-19 patients using chest X-rays (CXRs). Trained and tested on 1208 CXRs from Hospital 1 in the USA, the model categorizes severity into four levels (normal, mild, moderate, and severe) based on lung consolidation and opacity. Severity labels, determined by the median consensus of five radiologists, serve as the reference standard. The model's performance is internally validated against evaluations from an additional radiologist and two residents whose readings were excluded from the median. Performance is further evaluated on additional internal and external datasets comprising 2200 CXRs from the same hospital and 1300 CXRs from Hospital 2 in South Korea. The model achieves an average area under the curve (AUC) of 0.94 ± 0.01 across all classes in the primary dataset, surpassing human readers in each severity class, and achieves a higher Kendall correlation coefficient (KCC) of 0.80 ± 0.03. Performance is consistent across the varied datasets, highlighting the model's generalization. A key aspect of the model is its predictive uncertainty (PU), which is inversely related to the level of agreement among radiologists, particularly in mild and moderate cases. The study concludes that the model outperforms human readers in severity assessment and maintains consistent accuracy across diverse datasets. Its ability to provide confidence measures for its predictions is pivotal for potential clinical use, underscoring the BNN's role in enhancing diagnostic precision in lung disease analysis through CXR.
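The MCD approximation itself is standard: dropout stays active at inference, the prediction is averaged over repeated stochastic forward passes, and the spread of the samples serves as the predictive uncertainty. A minimal PyTorch sketch, assuming a classifier with dropout layers; using the summed class-probability variance as the PU measure is an illustrative choice, since the abstract does not give the authors' exact definition.

```python
import torch

def enable_dropout(model: torch.nn.Module) -> None:
    # Switch only Dropout layers to train mode so BatchNorm statistics stay frozen.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Monte Carlo Dropout inference: average softmax outputs over
    repeated stochastic passes; sample variance gives the uncertainty."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                        # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)           # averaged class probabilities
    pu = probs.var(dim=0).sum(dim=-1)        # predictive uncertainty per input
    return mean_probs, pu
```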

Deep Learning-Based Model for Non-invasive Hemoglobin Estimation via Body Parts Images: A Retrospective Analysis and a Prospective Emergency Department Study.
Pub Date : 2024-08-19 DOI: 10.1007/s10278-024-01209-4
En-Ting Lin, Shao-Chi Lu, An-Sheng Liu, Chia-Hsin Ko, Chien-Hua Huang, Chu-Lin Tsai, Li-Chen Fu

Anemia is a significant global health issue, affecting over a billion people worldwide according to the World Health Organization. The gold standard for diagnosing anemia relies on laboratory measurement of hemoglobin. In clinical practice, physicians often instead rely on visual examination of specific areas, such as the conjunctiva, to assess pallor; however, this method is subjective and depends on the physician's experience. We therefore propose a deep learning prediction model based on three input images from different body parts, namely the conjunctiva, palm, and fingernail. By incorporating body-part labels and employing a fusion attention mechanism, the model learns and enhances the salient features of each body part during training, enabling it to produce reliable results. Additionally, we employ a dual loss function that lets the regression model benefit from well-established classification methods, achieving stable handling of minority samples. We used a retrospective dataset (EYES-DEFY-ANEMIA) to develop this model, called the Body-Part-Anemia Network (BPANet). BPANet showed excellent performance in detecting anemia, with an accuracy of 0.849 and an F1-score of 0.828. The multi-body-part model was validated on a prospectively collected dataset of 101 patients at National Taiwan University Hospital, where prediction accuracy and F1-score reached 0.716 and 0.788, respectively. In summary, we have developed and validated a novel non-invasive hemoglobin prediction model based on image input from multiple body parts, with the potential for real-time use at home and in clinical settings.
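A minimal PyTorch sketch of the dual-loss idea: the hemoglobin regression is paired with an anemia classification objective derived by thresholding the label, so minority samples receive a stable learning signal. The threshold, weighting, and specific loss functions here are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class DualLoss(nn.Module):
    """Regression loss on hemoglobin plus a classification loss on an
    anemia label obtained by thresholding the true hemoglobin value."""

    def __init__(self, anemia_threshold: float = 12.0, alpha: float = 0.5):
        super().__init__()
        self.reg = nn.L1Loss()                  # hemoglobin regression term
        self.cls = nn.BCEWithLogitsLoss()       # anemia classification term
        self.threshold = anemia_threshold       # hypothetical g/dL cutoff
        self.alpha = alpha                      # hypothetical mixing weight

    def forward(self, hb_pred: torch.Tensor, cls_logit: torch.Tensor,
                hb_true: torch.Tensor) -> torch.Tensor:
        is_anemic = (hb_true < self.threshold).float()
        return self.reg(hb_pred, hb_true) + self.alpha * self.cls(cls_logit, is_anemic)
```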

From Revisions to Insights: Converting Radiology Report Revisions into Actionable Educational Feedback Using Generative AI Models.
Pub Date : 2024-08-19 DOI: 10.1007/s10278-024-01233-4
Shawn Lyo, Suyash Mohan, Alvand Hassankhani, Abass Noor, Farouk Dako, Tessa Cook

Expert feedback on trainees' preliminary reports is crucial for radiologic training, but real-time feedback can be challenging due to non-contemporaneous, remote reading and increasing imaging volumes. Trainee report revisions contain valuable educational feedback, but synthesizing data from raw revisions is difficult. Generative AI models can potentially analyze these revisions and provide structured, actionable feedback. This study used the OpenAI GPT-4 Turbo API to analyze paired synthesized and open-source analogs of preliminary and finalized reports, identify discrepancies, categorize their severity and type, and suggest review topics. Expert radiologists reviewed the output by grading the discrepancies, evaluating the accuracy of the severity and category assignments, and rating the relevance of the suggested review topics. The reproducibility of discrepancy detection and of maximal discrepancy severity was also examined. The model exhibited high sensitivity, detecting significantly more discrepancies than radiologists (W = 19.0, p < 0.001) with a strong positive correlation (r = 0.778, p < 0.001). Interrater reliability for severity and type was fair (Fleiss' kappa = 0.346 and 0.340, respectively; weighted kappa = 0.622 for severity). The LLM achieved a weighted F1 score of 0.66 for severity and 0.64 for type. Generated teaching points were considered relevant in ~85% of cases, and relevance correlated with maximal discrepancy severity (Spearman ρ = 0.76, p < 0.001). Reproducibility was moderate to good (ICC(2,1) = 0.690) for the number of discrepancies and substantial for maximal discrepancy severity (Fleiss' kappa = 0.718; weighted kappa = 0.94). Generative AI models can effectively identify discrepancies in report revisions and generate relevant educational feedback, offering promise for enhancing radiology training.
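A minimal sketch of this kind of pipeline step using the OpenAI Python client is shown below; the prompt wording, output schema, and model settings are illustrative assumptions, not the study's actual protocol.

```python
import json
from openai import OpenAI  # official openai Python client (v1 API)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Compare the preliminary and finalized radiology reports. Return JSON "
    "with a 'discrepancies' list; each item has: description, severity "
    "(1-4), type, and a suggested review topic."
)

def analyze_revision(preliminary: str, finalized: str) -> dict:
    """Ask the model for structured discrepancy feedback on one report pair."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user",
             "content": f"PRELIMINARY:\n{preliminary}\n\nFINAL:\n{finalized}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```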

Harnessing Deep Learning for Accurate Pathological Assessment of Brain Tumor Cell Types.
Pub Date : 2024-08-16 DOI: 10.1007/s10278-024-01107-9
Chongxuan Tian, Yue Xi, Yuting Ma, Cai Chen, Cong Wu, Kun Ru, Wei Li, Miaoqing Zhao

Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, both clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing the two conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common problem of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network to extract deep features, which are then classified by a support vector machine (SVM). Our evaluation shows that the ResNet50 (TL + SVM) model achieves 97.4% accuracy under tenfold cross-validation on the test set. These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.
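This transfer-learning pipeline can be sketched with torchvision and scikit-learn: a frozen ImageNet-pretrained ResNet50 produces 2048-dimensional deep features, and an SVM is fit on them. The weight variant and SVM kernel are assumptions; the authors' exact configuration is not given in the abstract.

```python
import torch
import torchvision.models as models
from torchvision.models import ResNet50_Weights
from sklearn.svm import SVC

# Pre-trained ResNet50 as a frozen feature extractor: drop the final FC layer.
weights = ResNet50_Weights.IMAGENET1K_V2
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = weights.transforms()  # matching resize/crop/normalize transforms

@torch.no_grad()
def extract_features(images):
    """images: iterable of PIL pathology tiles -> (N, 2048) deep features."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

# Hypothetical training step; train_images/labels are placeholders:
# X_train = extract_features(train_images)
# clf = SVC(kernel="rbf").fit(X_train, labels)
```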

EAAC-Net: An Efficient Adaptive Attention and Convolution Fusion Network for Skin Lesion Segmentation.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01223-6
Chao Fan, Zhentong Zhu, Bincheng Peng, Zhihui Xuan, Xinru Zhu

Accurate segmentation of skin lesions in dermoscopic images is of key importance for the quantitative analysis of melanoma. Although existing medical image segmentation methods have significantly improved skin lesion segmentation, they are still limited in extracting local features with global information, do not handle challenging lesions well, and usually have large parameter counts and high computational complexity. To address these issues, this paper proposes an efficient adaptive attention and convolution fusion network for skin lesion segmentation (EAAC-Net). We designed two parallel encoders. The efficient adaptive attention feature extraction module (EAAM) adaptively establishes global spatial and channel dependence by constructing the adjacency matrix of a directed graph, and can adaptively filter out the least relevant tokens at the coarse-grained region level, reducing the computational complexity of the self-attention mechanism. The efficient multiscale attention-based convolution module (EMA⋅C) uses multiscale attention for cross-space learning of local features extracted from the convolutional layer, enhancing the representation of richly detailed local features. In addition, we designed a reverse attention feature fusion module (RAFM) to gradually strengthen effective boundary information. To validate the performance of the proposed network, we compared it with other methods on the ISIC 2016, ISIC 2018, and PH2 public datasets; the experimental results show that EAAC-Net achieves superior segmentation performance under commonly used evaluation metrics.
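Reverse attention in segmentation networks typically reweights encoder features by the complement of a coarse prediction, so refinement concentrates on the uncertain boundary regions the current mask misses. A minimal PyTorch sketch of that general idea follows; the RAFM's actual design may differ, and the channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ReverseAttention(nn.Module):
    """Refine a coarse mask by attending to regions it is least sure about."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, coarse_mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) encoder features; coarse_mask: (B, 1, H, W) logits.
        attn = 1.0 - torch.sigmoid(coarse_mask)   # emphasize uncertain borders
        refined = feat * attn                     # reweight encoder features
        return coarse_mask + self.conv(refined)   # residual boundary refinement
```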

Predictive Study of Machine Learning-Based Multiparametric MRI Radiomics Nomogram for Perineural Invasion in Rectal Cancer: A Pilot Study.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01231-6
Yueyan Wang, Aiqi Chen, Kai Wang, Yihui Zhao, Xiaomeng Du, Yan Chen, Lei Lv, Yimin Huang, Yichuan Ma

This study aimed to establish and validate a nomogram model, synthesized by integrating multi-parametric magnetic resonance radiomics with clinical risk factors, for forecasting perineural invasion (PNI) in rectal cancer. We retrospectively collected data from 108 patients with pathologically confirmed rectal adenocarcinoma who underwent preoperative multiparametric MRI at the First Affiliated Hospital of Bengbu Medical College between April 2019 and August 2023. The dataset was divided into training and validation sets in a 7:3 ratio. Univariate and multivariate logistic regression analyses were performed to identify independent clinical risk factors associated with PNI. We manually delineated the region of interest (ROI) layer by layer on T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) sequences and extracted the image features. Five machine learning algorithms were used to construct radiomics models from the features selected by the least absolute shrinkage and selection operator (LASSO) method. The optimal radiomics model was then selected and combined with clinical features to formulate a nomogram model. Model performance was evaluated using receiver operating characteristic (ROC) curve analysis, and clinical value was assessed via decision curve analysis (DCA). The final selection comprised 10 optimal radiological features, and the SVM model showed the best predictive efficiency and robustness among the five classifiers. The area under the curve (AUC) values of the nomogram model were 0.945 (0.899, 0.991) and 0.846 (0.703, 0.99) for the training and validation sets, respectively. The nomogram model developed in this study exhibited excellent performance in predicting PNI of rectal cancer, offering valuable guidance for clinical decision-making and enabling prediction of perineural invasion status at an early stage.
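A minimal scikit-learn sketch of the LASSO-then-SVM workflow described above; the cross-validation folds, kernel, and scaling are illustrative assumptions rather than the authors' settings, and the feature/label arrays are placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lasso_svm_auc(X_train, y_train, X_val, y_val) -> float:
    """LASSO feature selection on radiomics features, then an SVM classifier;
    returns the validation ROC AUC."""
    scaler = StandardScaler().fit(X_train)
    Xs, Xv = scaler.transform(X_train), scaler.transform(X_val)

    # Keep only features with nonzero LASSO coefficients.
    lasso = LassoCV(cv=5).fit(Xs, y_train)
    keep = np.flatnonzero(lasso.coef_)

    svm = SVC(kernel="rbf", probability=True).fit(Xs[:, keep], y_train)
    return roc_auc_score(y_val, svm.predict_proba(Xv[:, keep])[:, 1])

# X_* are (n_patients, n_radiomics_features) arrays from the T2WI/DWI ROIs,
# y_* are binary PNI labels; both are hypothetical here.
```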

Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through Trans-IMSM Model.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01222-7
Vasanthi Durairaj, Palani Uthirapathy

Multi-modal medical image (MI) fusion generates composite images that combine complementary features from distinct images acquired under several conditions, helping physicians diagnose disease accurately. This research proposes a novel multi-modal MI fusion model, the guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach, to develop high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. The input CT and MRI images are gathered from a CT and MRI brain scan dataset. First, data preprocessing is carried out to improve image quality and generalization ability for further analysis. The preprocessed CT and MRI images are then decomposed into detail and base components using the guided filter-based MI decomposition approach, which involves two phases: acquiring the image guidance and decomposing the images with the guided filter. A Canny operator is employed to acquire image guidance comprising robust edges for the CT and MRI images, and the guided filter is applied to decompose the guidance and preprocessed images. The Trans-IMSM model then fuses the detail components, while a weighting approach is used for the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network, generating the final fused images for brain tumor detection. Extensive tests were carried out to evaluate the method's efficacy; the results demonstrated its robustness and effectiveness, achieving an accuracy of 98.64% and an SSIM of 0.94.
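A minimal sketch of the two decomposition phases using OpenCV (requires opencv-contrib-python for cv2.ximgproc); the Canny thresholds and guided-filter parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def decompose(img_u8: np.ndarray, radius: int = 8, eps: float = 1e-3):
    """Split a grayscale slice (uint8) into base and detail layers.

    Phase 1: a Canny edge map supplies the guidance image.
    Phase 2: the guided filter smooths the slice under that guidance;
    the smoothed result is the base layer, the residual the detail layer.
    """
    img = img_u8.astype(np.float32) / 255.0
    guidance = cv2.Canny(img_u8, 50, 150).astype(np.float32) / 255.0
    base = cv2.ximgproc.guidedFilter(guide=guidance, src=img,
                                     radius=radius, eps=eps)
    detail = img - base
    return base, detail

# Applied to both the preprocessed CT and MRI slices: the detail components
# would feed the transformer fusion, the base components the weighting step.
```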
