Pub Date: 2024-10-09 | DOI: 10.1007/s10278-024-01294-5
Peter Kamel, Mazhar Khalid, Rachel Steger, Adway Kanhere, Pranav Kulkarni, Vishwa Parekh, Paul H Yi, Dheeraj Gandhi, Uttam Bodanapally
Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in detecting subtle imaging findings. This study assesses whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired between 2016 and 2022, up to 48 h prior to confirmation of a DWI-positive infarct on MRI. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed with Dice scores and paired t-tests on a test set. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for standard 120 kV images, 190 keV images, and combined channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last-known-well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementing standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last-known-well.
Title: Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts.
Journal: Journal of imaging informatics in medicine
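The Dice overlap metric reported above compares a predicted segmentation mask against the MRI-derived ground truth. A minimal illustrative sketch (not the authors' code; array shapes and values are toy examples):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 3D masks: identical masks give ~1.0, disjoint masks give 0.0.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1
b[1:3, 1:3, 1:3] = 1
print(round(dice_score(a, b), 3))
```

A "global aggregate" Dice, as reported in the abstract, pools voxels across the whole test set rather than averaging per-case scores.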
Pub Date: 2024-10-09 | DOI: 10.1007/s10278-024-01285-6
Mana Moassefi, Nikki Fennell, Mindy Yang, Jennifer B Gunter, Teri M Sippel Schmit, Tessa S Cook
For the past 6 years, the Society for Imaging Informatics in Medicine (SIIM) annual meeting has provided a forum for women in imaging informatics to discuss the unique challenges they face. These sessions have evolved into a platform for understanding, sharing experiences, and developing practical strategies. The 2023 session was organized into three focus groups devoted to discussing imposter syndrome, workplace microaggressions, and work-life balance. This paper summarizes these discussions and highlights the significant themes and narratives that emerged. We aim to contribute to the larger conversation on gender equity in the informatics field, emphasizing the importance of understanding and addressing the challenges faced by women in informatics. By documenting these sessions, we seek to inspire actionable change towards a more inclusive and equitable future for everyone in imaging informatics.
Title: Empowering Women in Imaging Informatics: Confronting Imposter Syndrome, Addressing Microaggressions, and Striving for Work-Life Harmony.
Pub Date: 2024-10-07 | DOI: 10.1007/s10278-024-01286-5
Amir M Vahdani, Shahriar Faghani
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has received increasing attention as a valuable tool for improving model trustworthiness. An area of active research is the method of non-conformity score calculation for conformal prediction. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation via weighted averaging based on the inverse of the mean calibration error at each stage. We benchmarked our method on two publicly available medical image classification datasets: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared to baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). The baseline results of conformal prediction already exhibit small coverage errors. However, our method shows a significant improvement in coverage error, particularly in scenarios involving smaller datasets or smaller acceptable error levels, which are crucial in developing UQ frameworks for healthcare AI applications.
Title: Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification.
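For readers unfamiliar with conformal prediction, a generic split-conformal classification sketch on synthetic softmax outputs illustrates the coverage idea the paper builds on. The DCS weighting of intermediate outputs is not reproduced here, and all data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal quantile of non-conformity scores (1 - p_true)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_sets(test_probs, threshold):
    """Include every class whose non-conformity score is within threshold."""
    return (1.0 - test_probs) <= threshold

def fake_probs(n, k=3):
    """Synthetic 3-class probabilities standing in for softmax outputs."""
    e = np.exp(rng.normal(size=(n, k)))
    return e / e.sum(axis=1, keepdims=True)

cal_probs, cal_labels = fake_probs(500), rng.integers(0, 3, 500)
thr = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
test_probs, test_labels = fake_probs(500), rng.integers(0, 3, 500)
sets = prediction_sets(test_probs, thr)
coverage = sets[np.arange(500), test_labels].mean()
print(f"empirical coverage ~ {coverage:.2f}")
```

The coverage error the paper reports is the gap between this empirical coverage and the nominal 1 - alpha level.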
Pub Date: 2024-10-04 | DOI: 10.1007/s10278-024-01280-x
Mazen M Yassin, Asim Zaman, Jiaxi Lu, Huihui Yang, Anbo Cao, Haseeb Hassan, Taiyu Han, Xiaoqiang Miao, Yongkang Shi, Yingwei Guo, Yu Luo, Yan Kang
Predicting long-term clinical outcomes from an early DSC-PWI MRI scan is valuable for prognostication, resource management, clinical trials, and patient expectations. Current methods require subjective decisions about which imaging features to assess and may require time-consuming postprocessing. This study's goal was to predict multilabel 90-day modified Rankin Scale (mRS) scores in acute ischemic stroke patients by combining ensemble models with different configurations of radiomic features generated from dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI). In the follow-up study, 70 acute ischemic stroke (AIS) patients underwent magnetic resonance imaging within 24 hours post-stroke and received a follow-up scan; the single-scan study comprised 150 DSC-PWI scans from AIS patients. Radiomic features (DRF) were extracted from the DSC-PWI scans, the Lasso algorithm was applied for feature selection, and new features were generated from the initial and follow-up scans. Different ensemble models were then applied to classify among three classes: normal outcome (mRS 0-1), moderate outcome (mRS 2-4), and severe outcome (mRS 5-6). ANOVA and post hoc Tukey HSD tests confirmed significant differences in model performance across studies and classification techniques. Stacking models on average consistently outperformed the others, achieving an accuracy of 0.68 ± 0.15, precision of 0.68 ± 0.17, recall of 0.65 ± 0.14, and F1 score of 0.63 ± 0.15 in the follow-up study. Techniques such as Bo_Smote showed significantly higher recall and F1 scores, highlighting their robustness and effectiveness in handling imbalanced data. Ensemble models, particularly bagging and stacking, demonstrated superior performance, achieving nearly 0.93 accuracy, 0.95 precision, 0.94 recall, and 0.94 F1 in follow-up conditions, significantly outperforming single models. Ensemble models based on radiomics from combined initial and follow-up scans can predict multilabel 90-day stroke outcomes with reduced subjectivity and user burden.
Title: Leveraging Ensemble Models and Follow-up Data for Accurate Prediction of mRS Scores from Radiomic Features of DSC-PWI Images.
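The three outcome classes partition the 0-6 mRS range. A trivial sketch of that label binning (the ensemble pipeline itself is not reproduced):

```python
def mrs_to_class(mrs: int) -> str:
    """Bin a 90-day modified Rankin Scale score into the three outcome
    classes used in the study: 0-1 normal, 2-4 moderate, 5-6 severe."""
    if not 0 <= mrs <= 6:
        raise ValueError("mRS must be in 0..6")
    if mrs <= 1:
        return "normal"
    if mrs <= 4:
        return "moderate"
    return "severe"

print([mrs_to_class(m) for m in range(7)])
```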
Pub Date: 2024-10-02 | DOI: 10.1007/s10278-024-01275-8
Filippo Bargagna, Donato Zigrino, Lisa Anita De Santi, Dario Genovesi, Michele Scipioni, Brunella Favilli, Giuseppe Vergaro, Michele Emdin, Assuero Giorgetti, Vincenzo Positano, Maria Filomena Santarelli
Medical image classification using convolutional neural networks (CNNs) is promising but often requires extensive manual tuning for optimal model definition. Neural architecture search (NAS) automates this process, significantly reducing human intervention. This study applies NAS to [18F]-Florbetaben PET cardiac images to classify cardiac amyloidosis (CA) sub-types (amyloid light chain (AL) and transthyretin amyloid (ATTR)) and controls. Following data preprocessing and augmentation, an evolutionary cell-based NAS approach with a fixed network macro-structure is employed, automatically deriving the cells' micro-structure. The algorithm is executed five times, evaluating 100 mutating architectures per run on an augmented dataset of 4048 images (originally 597), totaling 5000 architectures evaluated. The best network (NAS-Net) achieves 76.95% overall accuracy. K-fold analysis yields mean ± SD percentages of sensitivity, specificity, and accuracy on the test dataset for AL subjects (98.7 ± 2.9, 99.3 ± 1.1, 99.7 ± 0.7), ATTR-CA subjects (93.3 ± 7.8, 78.0 ± 2.9, 70.9 ± 3.7), and controls (35.8 ± 14.6, 77.1 ± 2.0, 96.7 ± 4.4). The performance of the NAS-derived network rivals that of manually designed networks in the literature while using fewer parameters, validating the efficacy of the automatic approach.
Title: Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images.
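The evolutionary cell-based search can be pictured as a mutate-and-select loop over cell encodings. The toy sketch below uses a stand-in fitness function and hypothetical operation names; the actual study trains and evaluates each candidate network:

```python
import random

random.seed(0)
OPS = ["conv3x3", "conv1x1", "maxpool", "skip"]  # hypothetical cell ops

def mutate(cell):
    """Randomly swap one operation in the cell encoding."""
    child = list(cell)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(cell):
    # Stand-in for validation accuracy of the decoded network; the real
    # search would train and evaluate each candidate on the PET dataset.
    return sum(op != "skip" for op in cell) + random.random() * 0.1

best = [random.choice(OPS) for _ in range(5)]
best_fit = fitness(best)
for _ in range(100):  # 100 mutating architectures per run, as in the study
    cand = mutate(best)
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
print(best, round(best_fit, 2))
```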
Pub Date: 2024-10-02 | DOI: 10.1007/s10278-024-01269-6
Thanh Nguyen Chi, Hong Le Thi Thu, Tu Doan Quang, David Taniar
Breast cancer is a prominent cause of death among women worldwide. Infrared thermography, due to its cost-effectiveness and non-ionizing radiation, has emerged as a promising tool for early breast cancer diagnosis. This article presents a hybrid model approach for breast cancer detection using thermography images, designed to process and classify these images into healthy or cancerous categories, thus supporting disease diagnosis. Multiple pre-trained convolutional neural networks are employed for image feature extraction, and feature filter methods are proposed for feature selection, with diverse classifiers utilized for image classification. Evaluation on the DRM-IR test set revealed that the combination of ResNet34, the Chi-square (χ²) filter, and an SVM classifier demonstrated superior performance, achieving the highest accuracy at 99.62%. Furthermore, the highest accuracy improvement obtained was 18.3% when using the SVM classifier and Chi-square filter compared to regular convolutional neural networks. The results confirmed that the proposed method, with its high accuracy and lightweight model, outperforms state-of-the-art methods for breast cancer detection from thermography images, making it a good choice for computer-aided diagnosis.
Title: A Lightweight Method for Breast Cancer Detection Using Thermography Images with Optimized CNN Feature and Efficient Classification.
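Chi-square filtering scores each feature by how far its per-class totals deviate from class independence, and the highest-scoring features are kept for the classifier. A sketch of that scoring on synthetic non-negative features (in the spirit of the filter described, not the paper's implementation):

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic per feature for feature selection, for
    non-negative feature values (same idea as scikit-learn's chi2)."""
    classes = np.unique(y)
    Y = np.array([(y == c).astype(float) for c in classes])   # (k, n)
    observed = Y @ X                                          # per-class feature sums
    class_prob = Y.mean(axis=1, keepdims=True)                # (k, 1)
    expected = class_prob * X.sum(axis=0, keepdims=True)      # under independence
    return ((observed - expected) ** 2 / expected).sum(axis=0)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.random((200, 3))
X[:, 0] += y          # feature 0 depends on the class label
scores = chi2_scores(X, y)
print(scores.argmax())  # feature 0 scores highest
```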
Pub Date: 2024-10-01 | DOI: 10.1007/s10278-024-01261-0
Kerry E Goetz, Michael V Boland, Zhongdi Chu, Amberlynn A Reed, Shawn D Clark, Alexander J Towbin, Boonkit Purt, Kevin O'Donnell, Marilyn M Bui, Monief Eid, Christopher J Roth, Damien M Luviano, Les R Folio
Office-based testing, enhanced by advances in imaging technology, is routinely used in eye care to non-invasively assess ocular structure and function. This type of imaging, coupled with autonomous artificial intelligence, holds immense opportunity to diagnose eye diseases quickly. Despite the wide availability and use of ocular imaging, several factors hinder optimization of clinical practice and patient care. While some large institutions have developed end-to-end digital workflows that utilize electronic health records, enterprise imaging archives, and dedicated diagnostic viewers, this experience has not yet made its way to smaller and independent eye clinics. Fractured interoperability practices impact patient care in all healthcare domains, including eye care, where a scarcity of care centers makes collaboration essential among providers, specialists, and primary care clinicians who may be treating systemic conditions with profound impact on vision. The purpose of this white paper is to describe the current state of ocular imaging by focusing on the challenges related to interoperability, reporting, and clinical workflow.
Title: Ocular Imaging Challenges, Current State, and a Path to Interoperability: A HIMSS-SIIM Enterprise Imaging Community Whitepaper.
Pub Date: 2024-10-01 | DOI: 10.1007/s10278-024-01271-y
Nirupama, Virupakshappa
The increasing prevalence of skin diseases necessitates accurate and efficient diagnostic tools. This research introduces a novel skin disease classification model leveraging advanced deep learning techniques. The proposed architecture combines the MobileNet-V2 backbone, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and a channel attention mechanism. The model was trained on four diverse datasets: the PH2 dataset, the Skin Cancer MNIST: HAM10000 dataset, the DermNet dataset, and the Skin Cancer ISIC dataset. Data preprocessing techniques, including image resizing and normalization, played a crucial role in optimizing model performance. In this paper, the MobileNet-V2 backbone is implemented to extract hierarchical features from the preprocessed dermoscopic images. Multi-scale contextual information is fused by the ASPP module to generate a feature map. The attention mechanisms contributed significantly, enhancing the extraction of inter-channel relationships and multi-scale contextual information and thereby strengthening the discriminative power of the features. Finally, the output feature map is converted into a probability distribution through the softmax function. The proposed model outperformed several baseline models, including traditional machine learning approaches, emphasizing its superiority in skin disease classification with 98.6% overall accuracy. Its competitive performance with state-of-the-art methods positions it as a valuable tool for assisting dermatologists in early classification. The study also identified limitations and suggested avenues for future research, emphasizing the model's potential for practical implementation in the field of dermatology.
Title: MobileNet-V2: An Enhanced Skin Disease Classification by Attention and Multi-Scale Features.
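The final conversion of the output feature map into class probabilities uses the softmax function; a standard, numerically stable sketch:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis: subtracting the
    max leaves the result unchanged but avoids exp() overflow."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])  # toy per-class scores
probs = softmax(logits)
print(probs.round(3), probs.sum())  # probabilities sum to 1
```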
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01216-5
Hugo Pereira, Luis Romero, Pedro Miguel Faria
The standard for managing image data in healthcare is the DICOM (Digital Imaging and Communications in Medicine) protocol. DICOM web viewers provide flexible and accessible platforms for their users to view and analyze DICOM images remotely. This article presents a comprehensive evaluation of various web-based DICOM viewers, emphasizing their performance in different rendering scenarios, browsers, and operating systems. The study includes a total of 16 web-based viewers, of which 12 were surveyed and 7 were compared performance-wise based on the availability of an online demo. The criteria for examination include accessibility features, such as available information or requirements for usage; interface features, such as loading capabilities or cloud storage; two-dimensional (2D) viewing features, such as the ability to perform measurements or alter the viewing window; and three-dimensional (3D) viewing features, such as volume rendering or secondary reconstruction. Only 4 of the viewers allow viewing of local DICOM files in 3D (other than MPR, multiplanar reconstruction). Premium software offers a large number of features with overall good performance. One of the free alternatives demonstrated the best efficiency in both 2D and 3D rendering but lacks some 3D rendering features in its interface, which is still in development. Other free options exhibited slower performance, especially in 2D rendering, but have more ready-to-use features in their web apps. The evaluation also underscores the importance of browser choice, with some browsers performing much better than the competition, and highlights the significance of hardware when dealing with rendering tasks.
{"title":"Web-Based DICOM Viewers: A Survey and a Performance Classification.","authors":"Hugo Pereira, Luis Romero, Pedro Miguel Faria","doi":"10.1007/s10278-024-01216-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01216-5","url":null,"abstract":"<p><p>The standard for managing image data in healthcare is the DICOM (Digital Imaging and Communications in Medicine) protocol. DICOM web viewers provide flexible and accessible platforms for their users to view and analyze DICOM images remotely. This article presents a comprehensive evaluation of various web-based DICOM viewers, emphasizing their performance in different rendering scenarios, browsers, and operating systems. The study includes a total of 16 web-based viewers, of which 12 were surveyed and 7 were compared performance-wise based on the availability of an online demo. The criteria for examination include accessibility features, such as available information or requirements for usage; interface features, such as loading capabilities or cloud storage; two-dimensional (2D) viewing features, such as the ability to perform measurements or alter the viewing window; and three-dimensional (3D) viewing features, such as volume rendering or secondary reconstruction. Only 4 of the viewers allow viewing of local DICOM files in 3D (other than MPR, multiplanar reconstruction). Premium software offers a large number of features with overall good performance. One of the free alternatives demonstrated the best efficiency in both 2D and 3D rendering but lacks some 3D rendering features in its interface, which is still in development. Other free options exhibited slower performance, especially in 2D rendering, but have more ready-to-use features in their web apps. 
The evaluation also underscores the importance of browser choice, with some browsers performing much better than the competition, and highlights the significance of hardware when dealing with rendering tasks.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142336034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
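One of the 2D viewing features compared above, altering the viewing window, reduces to a linear mapping from raw CT values to display intensities. The sketch below shows a simplified version of that mapping (it ignores the half-unit offsets the DICOM standard's linear VOI LUT specifies, and the function name and example values are illustrative):

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map raw values to [0, 255] with a DICOM-style window center/width."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    # Values below the window clip to black, above it to white
    scaled = (np.clip(pixels, lo, hi) - lo) / (hi - lo)
    return (scaled * 255.0).astype(np.uint8)

# A typical brain window: center 40 HU, width 80 HU
ct = np.array([-1000.0, 0.0, 40.0, 80.0, 1000.0])  # Hounsfield units
disp = apply_window(ct, center=40, width=80)
# disp == [0, 0, 127, 255, 255]
```

Narrow windows spread a small range of Hounsfield units across the full grayscale, which is why the same slice looks completely different under brain and bone windows in any of the surveyed viewers.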
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01283-8
Sara Bouhafra, Hassan El Bahi
A brain tumor is a disease caused by uncontrolled cell proliferation in the brain, leading to serious health issues such as memory loss and motor impairment. Early diagnosis of brain tumors therefore plays a crucial role in extending patient survival. Given radiologists' heavy workloads and the need to reduce the likelihood of false diagnoses, advancing technologies, including computer-aided diagnosis and artificial intelligence, play an important role in assisting radiologists. In recent years, a number of deep learning-based methods have been applied to brain tumor detection and classification using MRI images and have achieved promising results. The main objective of this paper is to present a detailed review of previous research in this field. In addition, this work summarizes the existing limitations and significant highlights. The study systematically reviews 60 research articles published between 2020 and January 2024, extensively covering methods such as transfer learning, autoencoders, transformers, and attention mechanisms. The key findings formulated in this paper provide an analytic comparison and future directions.
{"title":"Deep Learning Approaches for Brain Tumor Detection and Classification Using MRI Images (2020 to 2024): A Systematic Review.","authors":"Sara Bouhafra, Hassan El Bahi","doi":"10.1007/s10278-024-01283-8","DOIUrl":"https://doi.org/10.1007/s10278-024-01283-8","url":null,"abstract":"<p><p>A brain tumor is a disease caused by uncontrolled cell proliferation in the brain, leading to serious health issues such as memory loss and motor impairment. Early diagnosis of brain tumors therefore plays a crucial role in extending patient survival. Given radiologists' heavy workloads and the need to reduce the likelihood of false diagnoses, advancing technologies, including computer-aided diagnosis and artificial intelligence, play an important role in assisting radiologists. In recent years, a number of deep learning-based methods have been applied to brain tumor detection and classification using MRI images and have achieved promising results. The main objective of this paper is to present a detailed review of previous research in this field. In addition, this work summarizes the existing limitations and significant highlights. The study systematically reviews 60 research articles published between 2020 and January 2024, extensively covering methods such as transfer learning, autoencoders, transformers, and attention mechanisms. The key findings formulated in this paper provide an analytic comparison and future directions. 
The review aims to provide a comprehensive understanding of automatic techniques that may be useful for professionals and academic communities working on brain tumor classification and detection.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142336030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
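Transfer learning, the first of the method families the review covers, typically means freezing a pretrained feature extractor and training only a small classification head. The miniature NumPy sketch below illustrates that split; the frozen "backbone" is simulated by a fixed random projection (a stand-in, not any reviewed model), and all names, sizes, and the toy two-class data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_backbone(images, proj):
    """Stand-in for a pretrained CNN's frozen feature extractor."""
    return np.maximum(images @ proj, 0.0)  # ReLU activations

def train_head(feats, labels, n_classes, lr=0.1, steps=300):
    """Fit only the softmax head by gradient descent; backbone untouched."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)       # softmax probabilities
        W -= lr * feats.T @ (p - onehot) / len(feats)  # cross-entropy grad
    return W

# Toy stand-in data: two well-separated "classes" of 64-dim inputs
X = np.vstack([rng.standard_normal((100, 64)) - 1.0,
               rng.standard_normal((100, 64)) + 1.0])
labels = np.repeat([0, 1], 100)

proj = rng.standard_normal((64, 16))                # frozen backbone weights
F = frozen_backbone(X, proj)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)   # standardize features
W = train_head(F, labels, n_classes=2)
accuracy = ((F @ W).argmax(axis=1) == labels).mean()
```

Because only `W` is updated, the approach needs far fewer labeled scans than training a full network, which is why it recurs throughout the brain tumor literature the review surveys.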