
Journal of Imaging Informatics in Medicine: Latest Publications

Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts.
Pub Date : 2024-10-09 DOI: 10.1007/s10278-024-01294-5
Peter Kamel, Mazhar Khalid, Rachel Steger, Adway Kanhere, Pranav Kulkarni, Vishwa Parekh, Paul H Yi, Dheeraj Gandhi, Uttam Bodanapally

Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in detecting subtle imaging findings. This study aims to assess whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired between 2016 and 2022, up to 48 h prior to confirmation of a DWI-positive infarct on MRI. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed with Dice scores, using paired t-tests on a test set. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for the standard 120 kV images, 190 keV images, and combined channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last-known-well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementing standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last-known-well.
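The Dice overlap used for evaluation can be computed directly from binary segmentation masks. A minimal NumPy sketch follows, with toy volumes standing in for the co-registered infarct masks (this is an illustration of the metric, not the actual nnU-Net evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 4x4x4 volumes standing in for co-registered infarct masks
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True   # 8 predicted voxels
truth[1:3, 1:3, 0:2] = True  # 8 ground-truth voxels, 4 overlapping
print(round(dice_score(pred, truth), 3))  # → 0.5
```

The same function applies unchanged whether the masks come from a single-channel or a dual-channel model, which is what makes the paired per-case comparison straightforward.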

Empowering Women in Imaging Informatics: Confronting Imposter Syndrome, Addressing Microaggressions, and Striving for Work-Life Harmony.
Pub Date : 2024-10-09 DOI: 10.1007/s10278-024-01285-6
Mana Moassefi, Nikki Fennell, Mindy Yang, Jennifer B Gunter, Teri M Sippel Schmit, Tessa S Cook

For the past 6 years, the Society for Imaging Informatics in Medicine (SIIM) annual meeting has provided a forum for women in imaging informatics to discuss the unique challenges they face. These sessions have evolved into a platform for understanding, sharing experiences, and developing practical strategies. The 2023 session was organized into three focus groups devoted to discussing imposter syndrome, workplace microaggressions, and work-life balance. This paper summarizes these discussions and highlights the significant themes and narratives that emerged. We aim to contribute to the larger conversation on gender equity in the informatics field, emphasizing the importance of understanding and addressing the challenges faced by women in informatics. By documenting these sessions, we seek to inspire actionable change towards a more inclusive and equitable future for everyone in imaging informatics.

Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification.
Pub Date : 2024-10-07 DOI: 10.1007/s10278-024-01286-5
Amir M Vahdani, Shahriar Faghani

Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness. One area of active research is the method of non-conformity score calculation for conformal prediction. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation, via weighted averaging based on the inverse of the mean calibration error for each stage. We benchmarked our method on two publicly available medical image classification datasets: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared to baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). Based on our findings, the baseline results of conformal prediction already exhibit small coverage errors. However, our method shows a significant improvement in coverage error, particularly noticeable in scenarios involving smaller datasets or smaller acceptable error levels, which are crucial in developing UQ frameworks for healthcare AI applications.
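The stage-weighting idea can be sketched with hypothetical numbers: weights proportional to the inverse of each stage's mean calibration error, a weighted-average non-conformity score per sample, and a standard split-conformal quantile threshold. The stage count and all values below are illustrative, not from the paper:

```python
import numpy as np

def stage_weights(calibration_errors):
    """Per-stage weights proportional to the inverse of the mean calibration error."""
    inv = 1.0 / np.asarray(calibration_errors, dtype=float)
    return inv / inv.sum()

def combined_nonconformity(stage_scores, weights):
    """Weighted average over stages of per-sample non-conformity scores."""
    return np.average(np.asarray(stage_scores, dtype=float), axis=0, weights=weights)

# Hypothetical numbers: three supervision stages, four calibration samples
cal_err = [0.10, 0.05, 0.02]            # deeper stages assumed better calibrated
w = stage_weights(cal_err)              # better-calibrated stages get larger weights
scores = [[0.9, 0.2, 0.5, 0.7],         # stage 1 non-conformity scores
          [0.8, 0.1, 0.4, 0.6],         # stage 2
          [0.7, 0.1, 0.3, 0.6]]         # stage 3
combined = combined_nonconformity(scores, w)

# Standard split-conformal threshold for a 90% target coverage level
alpha = 0.1
threshold = float(np.quantile(combined, 1 - alpha))
print(w.round(3), combined.round(4), round(threshold, 4))
```

At test time, a label would be included in the prediction set whenever its combined non-conformity score falls below this calibration threshold.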

Leveraging Ensemble Models and Follow-up Data for Accurate Prediction of mRS Scores from Radiomic Features of DSC-PWI Images.
Pub Date : 2024-10-04 DOI: 10.1007/s10278-024-01280-x
Mazen M Yassin, Asim Zaman, Jiaxi Lu, Huihui Yang, Anbo Cao, Haseeb Hassan, Taiyu Han, Xiaoqiang Miao, Yongkang Shi, Yingwei Guo, Yu Luo, Yan Kang

Predicting long-term clinical outcomes from an early DSC-PWI MRI scan is valuable for prognostication, resource management, clinical trials, and setting patient expectations. Current methods require subjective decisions about which imaging features to assess and may require time-consuming postprocessing. This study's goal was to predict the multilabel 90-day modified Rankin Scale (mRS) score in acute ischemic stroke patients by combining ensemble models with different configurations of radiomic features generated from dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI). In the follow-up study, a total of 70 acute ischemic stroke (AIS) patients underwent magnetic resonance imaging within 24 hours post-stroke and received a follow-up scan; the single-scan study comprised 150 DSC-PWI scans from AIS patients. Radiomic features were extracted from the DSC-PWI scans, the Lasso algorithm was applied for feature selection, and new features were generated from the initial and follow-up scans. Different ensemble models were then applied to classify outcomes into three classes: normal (mRS 0-1), moderate (mRS 2-4), and severe (mRS 5-6). ANOVA and post-hoc Tukey HSD tests confirmed significant differences in model performance across studies and classification techniques. Stacking models on average outperformed the others, achieving an accuracy of 0.68 ± 0.15, precision of 0.68 ± 0.17, recall of 0.65 ± 0.14, and F1 score of 0.63 ± 0.15 in the follow-up study. Techniques like Bo_Smote showed significantly higher recall and F1 scores, highlighting their robustness and effectiveness in handling imbalanced data. Ensemble models, particularly Bagging and Stacking, demonstrated superior performance under follow-up conditions, achieving nearly 0.93 accuracy, 0.95 precision, 0.94 recall, and 0.94 F1, significantly outperforming single models. Ensemble models based on radiomics generated from combined initial and follow-up scans can thus be used to predict multilabel 90-day stroke outcomes with reduced subjectivity and user burden.
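The three-class grouping of mRS scores described above can be expressed as a small helper function (an illustrative sketch of the labeling scheme, not the authors' code):

```python
def mrs_class(mrs: int) -> str:
    """Bin a 90-day modified Rankin Scale score into the study's three outcome groups."""
    if not 0 <= mrs <= 6:
        raise ValueError("mRS scores range from 0 to 6")
    if mrs <= 1:
        return "normal"    # mRS 0-1
    if mrs <= 4:
        return "moderate"  # mRS 2-4
    return "severe"        # mRS 5-6

print([mrs_class(s) for s in range(7)])
# → ['normal', 'normal', 'moderate', 'moderate', 'moderate', 'severe', 'severe']
```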

Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images.
Pub Date : 2024-10-02 DOI: 10.1007/s10278-024-01275-8
Filippo Bargagna, Donato Zigrino, Lisa Anita De Santi, Dario Genovesi, Michele Scipioni, Brunella Favilli, Giuseppe Vergaro, Michele Emdin, Assuero Giorgetti, Vincenzo Positano, Maria Filomena Santarelli

Medical image classification using convolutional neural networks (CNNs) is promising but often requires extensive manual tuning to define an optimal model. Neural architecture search (NAS) automates this process, significantly reducing human intervention. This study applies NAS to [18F]-Florbetaben PET cardiac images to classify cardiac amyloidosis (CA) sub-types (amyloid light chain (AL) and transthyretin amyloid (ATTR)) and controls. Following data preprocessing and augmentation, an evolutionary cell-based NAS approach with a fixed network macro-structure is employed, automatically deriving the cells' micro-structure. The algorithm is executed five times, evaluating 100 mutating architectures per run on an augmented dataset of 4048 images (originally 597), for a total of 5000 architectures evaluated. The best network (NAS-Net) achieves 76.95% overall accuracy. K-fold analysis yields the following mean ± SD percentages of sensitivity, specificity, and accuracy on the test dataset: AL subjects (98.7 ± 2.9, 99.3 ± 1.1, 99.7 ± 0.7), ATTR-CA subjects (93.3 ± 7.8, 78.0 ± 2.9, 70.9 ± 3.7), and controls (35.8 ± 14.6, 77.1 ± 2.0, 96.7 ± 4.4). The performance of the NAS-derived network rivals that of manually designed networks in the literature while using fewer parameters, validating the efficacy of the automatic approach.
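The evolutionary search can be illustrated in miniature: repeatedly mutate the current best architecture, evaluate the candidates, and keep the fittest. The list-of-ints "architecture" and the fitness function below are toy stand-ins for the cell structures and validation accuracy actually used in the study:

```python
import random

def evolutionary_search(evaluate, mutate, init, generations=15, population=10, seed=0):
    """Minimal evolutionary loop: mutate the current best, keep the fittest candidate."""
    rng = random.Random(seed)
    best, best_fit = init, evaluate(init)
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(population)]
        for cand in candidates:
            fit = evaluate(cand)
            if fit > best_fit:
                best, best_fit = cand, fit
    return best, best_fit

# Toy stand-ins: an "architecture" is a list of ints; fitness peaks when all entries equal 3
def fitness(arch):
    return -sum((a - 3) ** 2 for a in arch)

def mutate(arch, rng):
    arch = list(arch)
    i = rng.randrange(len(arch))
    arch[i] += rng.choice([-1, 1])  # perturb one "cell" choice
    return arch

best, fit = evolutionary_search(fitness, mutate, [0, 0, 0])
print(best, fit)
```

In the real search, `evaluate` would train and validate a candidate network, which is why evaluating thousands of architectures dominates the computational cost.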

A Lightweight Method for Breast Cancer Detection Using Thermography Images with Optimized CNN Feature and Efficient Classification.
Pub Date : 2024-10-02 DOI: 10.1007/s10278-024-01269-6
Thanh Nguyen Chi, Hong Le Thi Thu, Tu Doan Quang, David Taniar

Breast cancer is a prominent cause of death among women worldwide. Infrared thermography, owing to its cost-effectiveness and non-ionizing radiation, has emerged as a promising tool for early breast cancer diagnosis. This article presents a hybrid model approach for breast cancer detection using thermography images, designed to process and classify these images into healthy or cancerous categories, thus supporting disease diagnosis. Multiple pre-trained convolutional neural networks are employed for image feature extraction, feature filter methods are proposed for feature selection, and diverse classifiers are utilized for image classification. Evaluation on the DRM-IR test set revealed that the combination of ResNet34, a Chi-square (χ²) filter, and an SVM classifier demonstrated superior performance, achieving the highest accuracy at 99.62%. Furthermore, the SVM classifier with the Chi-square filter yielded an accuracy improvement of up to 18.3% over regular convolutional neural networks. The results confirmed that the proposed method, with its high accuracy and lightweight model, outperforms state-of-the-art methods for breast cancer detection from thermography images, making it a good choice for computer-aided diagnosis.
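The Chi-square filter scores each (non-negative) CNN feature against the class labels so that the most discriminative features can be kept. A minimal NumPy sketch of the statistic follows (the same idea as scikit-learn's `chi2` scorer; the tiny feature matrix is a toy stand-in for extracted CNN features):

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic of each non-negative feature against the class labels
    (the same idea as scikit-learn's feature_selection.chi2)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])       # class x feature
    class_prob = np.array([(y == c).mean() for c in classes])[:, None]  # class frequencies
    expected = class_prob * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Toy CNN-feature matrix: feature 0 separates the classes, feature 1 is noise
X = np.array([[5.0, 1.0],
              [4.0, 1.2],
              [0.5, 1.1],
              [0.4, 0.9]])
y = np.array([1, 1, 0, 0])
scores = chi2_scores(X, y)
print(scores.round(2))  # → [6.63 0.01]
```

Selecting the top-k features by this score before an SVM is what keeps the final classifier lightweight.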

Ocular Imaging Challenges, Current State, and a Path to Interoperability: A HIMSS-SIIM Enterprise Imaging Community Whitepaper.
Pub Date : 2024-10-01 DOI: 10.1007/s10278-024-01261-0
Kerry E Goetz, Michael V Boland, Zhongdi Chu, Amberlynn A Reed, Shawn D Clark, Alexander J Towbin, Boonkit Purt, Kevin O'Donnell, Marilyn M Bui, Monief Eid, Christopher J Roth, Damien M Luviano, Les R Folio

Office-based testing, enhanced by advances in imaging technology, is routinely used in eye care to non-invasively assess ocular structure and function. This type of imaging, coupled with autonomous artificial intelligence, holds immense opportunity to diagnose eye diseases quickly. Despite the wide availability and use of ocular imaging, several factors hinder the optimization of clinical practice and patient care. While some large institutions have developed end-to-end digital workflows that utilize electronic health records, enterprise imaging archives, and dedicated diagnostic viewers, this experience has not yet made its way to smaller and independent eye clinics. Fractured interoperability practices impact patient care in all healthcare domains, including eye care, where a scarcity of care centers makes collaboration essential among providers, specialists, and primary care clinicians who may be treating systemic conditions with profound impact on vision. The purpose of this white paper is to describe the current state of ocular imaging by focusing on the challenges related to interoperability, reporting, and clinical workflow.

借助成像技术的进步,眼科常规使用诊室测试对眼部结构和功能进行无创评估。这种成像技术与自主人工智能相结合,为快速诊断眼部疾病带来了巨大的机遇。尽管眼科成像技术得到了广泛的普及和应用,但仍有一些因素阻碍了临床实践和患者护理的优化。虽然一些大型机构已经开发出利用电子健康记录、企业成像档案和专用诊断查看器的端到端数字工作流程,但这种经验尚未推广到规模较小的独立眼科诊所。断裂的互操作性实践影响着所有医疗保健领域的患者护理,包括护理中心稀缺的眼科护理,这使得医疗服务提供者、专科医生和初级保健医生之间的协作变得至关重要,因为他们可能正在治疗对视力有深远影响的系统性疾病。本白皮书旨在描述眼科成像的现状,重点关注与互操作性、报告和临床工作流程相关的挑战。
{"title":"Ocular Imaging Challenges, Current State, and a Path to Interoperability: A HIMSS-SIIM Enterprise Imaging Community Whitepaper.","authors":"Kerry E Goetz, Michael V Boland, Zhongdi Chu, Amberlynn A Reed, Shawn D Clark, Alexander J Towbin, Boonkit Purt, Kevin O'Donnell, Marilyn M Bui, Monief Eid, Christopher J Roth, Damien M Luviano, Les R Folio","doi":"10.1007/s10278-024-01261-0","DOIUrl":"https://doi.org/10.1007/s10278-024-01261-0","url":null,"abstract":"<p><p>Office-based testing, enhanced by advances in imaging technology, is routinely used in eye care to non-invasively assess ocular structure and function. This type of imaging coupled with autonomous artificial intelligence holds immense opportunity to diagnose eye diseases quickly. Despite the wide availability and use of ocular imaging, there are several factors that hinder optimization of clinical practice and patient care. While some large institutions have developed end-to-end digital workflows that utilize electronic health records, enterprise imaging archives, and dedicated diagnostic viewers, this experience has not yet made its way to smaller and independent eye clinics. Fractured interoperability practices impact patient care in all healthcare domains, including eye care where there is a scarcity of care centers, making collaboration essential among providers, specialists, and primary care who might be treating systemic conditions with profound impact on vision. 
The purpose of this white paper is to describe the current state of ocular imaging by focusing on the challenges related to interoperability, reporting, and clinical workflow.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142368154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MobileNet-V2: An Enhanced Skin Disease Classification by Attention and Multi-Scale Features.
Pub Date : 2024-10-01 DOI: 10.1007/s10278-024-01271-y
Nirupama, Virupakshappa

The increasing prevalence of skin diseases necessitates accurate and efficient diagnostic tools. This research introduces a novel skin disease classification model leveraging advanced deep learning techniques. The proposed architecture combines the MobileNet-V2 backbone, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and a Channel Attention Mechanism. The model was trained on four diverse datasets: PH2, Skin Cancer MNIST: HAM10000, DermNet, and Skin Cancer ISIC. Data preprocessing techniques, including image resizing and normalization, played a crucial role in optimizing model performance. In this paper, the MobileNet-V2 backbone is used to extract hierarchical features from the preprocessed dermoscopic images. Multi-scale contextual information is fused by the ASPP module to generate a feature map. The attention mechanisms contribute significantly, capturing inter-channel relationships and multi-scale contextual information to enhance the discriminative power of the features. Finally, the output feature map is converted into a probability distribution through the softmax function. The proposed model outperformed several baseline models, including traditional machine learning approaches, emphasizing its superiority in skin disease classification with 98.6% overall accuracy. Its competitive performance with state-of-the-art methods positions it as a valuable tool for assisting dermatologists in early classification. The study also identified limitations and suggested avenues for future research, emphasizing the model's potential for practical implementation in the field of dermatology.
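The squeeze-and-excitation channel attention described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy with random weights and arbitrary shapes — not the authors' implementation — showing how the squeeze (global average pooling), excitation (two small fully connected layers), channel rescaling, and final softmax fit together:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map.

    Squeeze: global average pooling per channel -> (C,).
    Excitation: a bottleneck FC layer with ReLU, then an FC layer with
    sigmoid -> per-channel weights in (0, 1) that rescale the input.
    """
    squeezed = x.mean(axis=(1, 2))                        # (C,)
    hidden = np.maximum(0.0, squeezed @ w1 + b1)          # ReLU, (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # sigmoid, (C,)
    return x * weights[:, None, None]                     # channel-wise rescale

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2      # toy channel count, spatial size, reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C // r, C)); b2 = np.zeros(C)

y = se_block(x, w1, b1, w2, b2)
logits = y.mean(axis=(1, 2))     # stand-in for the classifier head
probs = softmax(logits)          # probability distribution over 8 toy classes
print(y.shape, float(probs.sum()))
```

The reduction ratio `r` and the 8-class head here are placeholders; in the paper the rescaled features come from the MobileNet-V2/ASPP pipeline and the softmax spans the actual disease classes.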

Citations: 0
Web-Based DICOM Viewers: A Survey and a Performance Classification.
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01216-5
Hugo Pereira, Luis Romero, Pedro Miguel Faria

The standard for managing image data in healthcare is the DICOM (Digital Imaging and Communications in Medicine) protocol. DICOM web viewers provide flexible and accessible platforms for their users to view and analyze DICOM images remotely. This article presents a comprehensive evaluation of various web-based DICOM viewers, emphasizing their performance in different rendering scenarios, browsers, and operating systems. The study includes a total of 16 web-based viewers, of which 12 were surveyed, and 7 were compared performance-wise based on the availability of an online demo. The criteria for examination include accessibility features, such as available information or requirements for usage; interface features, such as loading capabilities or cloud storage; two-dimensional (2D) viewing features, such as the ability to perform measurements or alter the viewing window; and three-dimensional (3D) viewing features, such as volume rendering or secondary reconstruction. Only 4 of the viewers allow for the viewing of local DICOM files in 3D (other than MPR, multiplanar reconstruction). Premium software offers a large number of features with overall good performance. One of the free alternatives demonstrated the best efficiency in both 2D and 3D rendering but faces challenges with missing 3D rendering features in its interface, which is still in development. Other free options exhibited slower performance, especially in 2D rendering, but offer more ready-to-use features in their web apps. The evaluation also underscores the importance of browser choice, with some browsers performing much better than the competition, and highlights the significance of hardware when dealing with rendering tasks.
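Any viewer that loads local files has to recognize the DICOM Part 10 file format first: per the standard (PS3.10, Section 7.1), a file begins with a 128-byte preamble followed by the four magic bytes "DICM". A minimal sniffing check — using a synthetic header rather than a real file — looks like this:

```python
def is_dicom_part10(buf: bytes) -> bool:
    """A DICOM Part 10 file starts with a 128-byte preamble followed by
    the magic bytes b'DICM' (DICOM PS3.10, Section 7.1)."""
    return len(buf) >= 132 and buf[128:132] == b"DICM"

# Synthetic header: 128 zero bytes of preamble, then the magic marker
# and a few filler bytes standing in for the File Meta Information.
fake_dicom = bytes(128) + b"DICM" + bytes(16)

print(is_dicom_part10(fake_dicom))      # True
print(is_dicom_part10(b"not a dicom"))  # False
```

Real parsing (transfer syntax, data elements, pixel data) is far more involved — libraries such as pydicom or the JavaScript toolkits behind these web viewers handle that — but this four-byte signature is the standard first gate.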

Citations: 0
Deep Learning Approaches for Brain Tumor Detection and Classification Using MRI Images (2020 to 2024): A Systematic Review.
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01283-8
Sara Bouhafra, Hassan El Bahi

A brain tumor is a disease caused by uncontrolled cell proliferation in the brain, leading to serious health issues such as memory loss and motor impairment. Early diagnosis of brain tumors therefore plays a crucial role in extending the survival of patients. Given radiologists' heavy workloads and the need to reduce the likelihood of false diagnoses, advancing technologies, including computer-aided diagnosis and artificial intelligence, play an important role in assisting radiologists. In recent years, a number of deep learning-based methods have been applied to brain tumor detection and classification using MRI images and have achieved promising results. The main objective of this paper is to present a detailed review of previous research in this field. In addition, this work summarizes the existing limitations and significant highlights. The study systematically reviews 60 research articles published between 2020 and January 2024, extensively covering methods such as transfer learning, autoencoders, transformers, and attention mechanisms. The key findings formulated in this paper provide an analytic comparison and future directions. The review aims to provide a comprehensive understanding of automatic techniques that may be useful for professionals and academic communities working on brain tumor classification and detection.
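Two of the method families this review covers — transformers and attention mechanisms — share the same core operation, scaled dot-product attention. A minimal NumPy sketch with arbitrary toy shapes (not any reviewed paper's implementation) shows the mechanism: each output token is a weighted mix of the value vectors, with weights from a softmax over query-key similarities:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V — each output row is a convex
    combination of the rows of V, weighted by query-key similarity."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(1)
q = rng.standard_normal((4, 8))   # 4 query tokens (e.g. image patches), dim 8
k = rng.standard_normal((6, 8))   # 6 key tokens
v = rng.standard_normal((6, 8))   # 6 value tokens

out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, np.allclose(w.sum(axis=-1), 1.0))
```

In vision transformers applied to MRI slices, the tokens would be embedded image patches and the attention would typically be multi-headed; this single-head sketch is the building block.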

Citations: 0