Amyloid plaques, implicated in Alzheimer's disease, exhibit a spatial propagation pattern through interconnected brain regions, suggesting network-driven dissemination. This study uses PET imaging to investigate these brain connections and introduces a novel method for analyzing the amyloid network. A modified version of a previously established method is applied to explore distinctive patterns of connectivity alterations across cognitive performance domains. PET images illustrate differences in amyloid accumulation, complemented by quantitative network indices. The normal control group shows minimal amyloid accumulation and preserved network connectivity. The MCI group displays intermediate amyloid deposits and partial similarity to both normal controls and AD patients, reflecting the evolving nature of cognitive decline. Alzheimer's disease patients exhibit high amyloid levels and pronounced disruptions in network connectivity, reflected in low global efficiency (Eg) and local efficiency (Eloc). Connectivity alterations are found mostly in the temporal lobe, particularly in regions related to memory and cognition. Network connectivity alterations, combined with amyloid PET imaging, show potential as discriminative markers for different cognitive states. Dataset-specific variations must be considered when interpreting connectivity patterns. The variable overlap between MCI and AD emphasizes the heterogeneity of cognitive decline progression, suggesting personalized approaches for neurodegenerative disorders. This study contributes to understanding the evolving network characteristics associated with normal cognition, MCI, and AD, offering valuable insights for developing diagnostic and prognostic markers.
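Eg and Eloc are standard graph-theoretic measures: global efficiency averages the inverse shortest-path length over all node pairs, and local efficiency averages the global efficiency of each node's neighborhood subgraph. A minimal pure-Python sketch on a toy unweighted graph (illustrative only, not the paper's amyloid network):

```python
from itertools import combinations

def bfs_distances(adj, src):
    """Unweighted shortest-path lengths from src via breadth-first search."""
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def global_efficiency(adj):
    """Eg = mean of 1/d(i,j) over all node pairs (0 for disconnected pairs)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for i, j in combinations(nodes, 2):
        d = bfs_distances(adj, i).get(j)
        if d:
            total += 1.0 / d
    return 2.0 * total / (n * (n - 1))

def local_efficiency(adj):
    """Eloc = mean over nodes of the global efficiency of the subgraph
    induced by each node's neighbours."""
    effs = []
    for u in adj:
        nbrs = set(adj[u])
        if len(nbrs) < 2:
            effs.append(0.0)
            continue
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

# Toy 5-node network: a triangle (0,1,2) with a tail 2-3-4.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(round(global_efficiency(adj), 3))  # 0.717
print(round(local_efficiency(adj), 3))   # 0.467
```

Disrupted networks, as in the AD group, lose short paths between regions, which drives both indices down.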
Differences in Topography of Individual Amyloid Brain Networks by Amyloid PET Images in Healthy Control, Mild Cognitive Impairment, and Alzheimer's Disease.
Tsung-Ying Ho, Shu-Hua Huang, Chi-Wei Huang, Kun-Ju Lin, Jung-Lung Hsu, Kuo-Lun Huang, Ko-Ting Chen, Chiung-Chih Chang, Ing-Tsung Hsiao, Sheng-Yao Huang
Journal of imaging informatics in medicine. Pub Date: 2024-09-04. DOI: 10.1007/s10278-024-01230-7
Pub Date: 2024-09-04. DOI: 10.1007/s10278-024-01213-8
Feixiang Zhao, Mingzhe Liu, Mingrong Xiang, Dongfen Li, Xin Jiang, Xiance Jin, Cai Lin, Ruili Wang
In recent years, X-ray low-dose computed tomography (LDCT) has garnered widespread attention due to its significant reduction in the risk of patient radiation exposure. However, LDCT images often contain a substantial amount of noise, adversely affecting diagnostic quality. To mitigate this, a plethora of LDCT denoising methods have been proposed. Among them, deep learning (DL) approaches have emerged as the most effective, owing to their robust feature extraction capabilities. Yet, the prevalent supervised training paradigm is often impractical because of the difficulty of acquiring paired low-dose and normal-dose CT scans in clinical settings. Consequently, unsupervised and self-supervised deep learning methods have been introduced for LDCT denoising, showing considerable potential for clinical applications. The efficacy of these methods hinges on their training strategies, yet no comprehensive review of these strategies exists. Our review aims to address this gap, offering insights and guidance for researchers and practitioners. Based on training strategies, we categorize LDCT denoising methods into six groups: (i) cycle consistency-based, (ii) score matching-based, (iii) noise statistics-based, (iv) similarity-based, (v) LDCT synthesis model-based, and (vi) hybrid methods. For each category, we delve into the theoretical underpinnings, training strategies, strengths, and limitations. In addition, we summarize the open-source code of the reviewed methods. Finally, the review concludes with a discussion of open issues and future research directions.
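As an illustration of the cycle-consistency idea in category (i): two mappings between the low-dose and normal-dose domains are trained on unpaired data by penalizing the round-trip reconstruction error. A toy NumPy sketch with stand-in linear "generators" (real methods use trained networks; F, G, and the data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in linear "generators" on 8-dim toy signals:
# F: low-dose -> denoised, G: denoised -> low-dose.
W_f = np.eye(8) + 0.1 * rng.normal(size=(8, 8))
W_g = np.linalg.inv(W_f)  # exact inverse, so the cycle loss is ~0

def F(x):
    return W_f @ x

def G(y):
    return W_g @ y

def cycle_consistency_loss(x, y):
    """L_cyc = ||G(F(x)) - x||_1 + ||F(G(y)) - y||_1.
    Penalizing reconstruction error after a round trip lets F and G be
    trained from unpaired low-dose and normal-dose images."""
    return np.abs(G(F(x)) - x).mean() + np.abs(F(G(y)) - y).mean()

x = rng.normal(size=8)  # unpaired low-dose sample
y = rng.normal(size=8)  # unpaired normal-dose sample
print(cycle_consistency_loss(x, y))  # effectively zero for exact inverses
```

In an actual cycle-consistent model, this term is minimized jointly with adversarial losses so that F also maps into the normal-dose image distribution.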
Unsupervised and Self-supervised Learning in Low-Dose Computed Tomography Denoising: Insights from Training Strategies.
Pub Date: 2024-09-04. DOI: 10.1007/s10278-024-01238-z
Chuandong Qin, Yiqing Zhang
Iris recognition, renowned for its exceptional precision, has been widely adopted across diverse industries. However, noise and blur frequently compromise the quality of iris images, adversely affecting recognition accuracy. In this research, we refine the traditional Wiener filter image restoration technique by integrating it with a gradient descent strategy, specifically the Barzilai-Borwein (BB) step-size selection, to enhance both the precision and resilience of iris recognition systems. The BB gradient method optimizes the parameters of the Wiener filter under simulated blurring and noise conditions, restoring iris images degraded by blur and noise. This leads to a significant improvement in the clarity of the restored images and, consequently, a notable gain in recognition performance. Our experiments demonstrate that the method surpasses conventional filtering techniques in both subjective visual quality assessments and objective peak signal-to-noise ratio (PSNR) evaluations.
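The Barzilai-Borwein rule referenced above chooses the step size alpha_k = (s.s)/(s.y) from the latest iterate difference s = x_k - x_{k-1} and gradient difference y = g_k - g_{k-1}, giving near-second-order behavior at first-order cost. A hedged sketch on a generic least-squares deblurring toy (not the authors' Wiener-filter parameter optimization; the blur matrix A, data b, and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy deblurring least squares: recover x from b = A x + noise by
# minimising f(x) = 0.5 * ||A x - b||^2 with BB1 step sizes.
n = 30
A = np.eye(n) + 0.4 * np.eye(n, k=1) + 0.4 * np.eye(n, k=-1)  # 1-D blur
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=n)

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(n)
g = grad(x)
alpha = 1e-3  # plain small step for the first iteration only
for _ in range(200):
    if np.linalg.norm(g) < 1e-10:  # converged
        break
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)      # BB1 step size
    x, g = x_new, g_new

print(np.linalg.norm(A @ x - b))   # residual driven close to zero
```

Because the BB step is the ratio of two quadratic forms in s, it always lies between the reciprocals of the largest and smallest curvatures, which is what makes the otherwise nonmonotone iteration converge on this quadratic.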
Application of Wiener Filter Based on Improved BB Gradient Descent in Iris Image Restoration.
Pub Date: 2024-09-03. DOI: 10.1007/s10278-024-01239-y
Lalit Garia, Hariharan Muthusamy
Thermography is a non-invasive, non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) that uses the dual-tree complex wavelet transform (DTCWT) and attention gates was proposed for breast thermal image segmentation of frontal- and lateral-view thermograms, aiming to outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Breast thermograms were then classified into healthy or cancerous categories by extracting texture-based, histogram-based, and deep features from the segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. Compared with other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology achieved a higher accuracy of 99.90% using VGG16 deep features with NCA and a Random Forest classifier. These results indicate that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.
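The Dice coefficient and sensitivity used to evaluate segmentation can be computed directly from binary masks; a minimal NumPy sketch on toy 8x8 masks (not the thermogram data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def sensitivity(pred, truth):
    """TP / (TP + FN): fraction of true ROI pixels recovered."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1  # 16-pixel ROI
pred  = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1   # shifted 1 row
print(dice_coefficient(pred, truth))  # 2*12/(16+16) = 0.75
print(sensitivity(pred, truth))       # 12/16 = 0.75
```

Dice penalizes both missed ROI pixels and spurious ones, which is why it is the default overlap score for segmentation benchmarks like this one.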
Dual-Tree Complex Wavelet Pooling and Attention-Based Modified U-Net Architecture for Automated Breast Thermogram Segmentation and Classification.
Pub Date: 2024-09-03. DOI: 10.1007/s10278-024-01192-w
Vincent-Béni Sèna Zossou, Freddy Houéhanou Rodrigue Gnangnon, Olivier Biaou, Florent de Vathaire, Rodrigue S Allodji, Eugène C Ezin
Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing the grayscale variations in liver tissue across different computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases. Accurately differentiating between these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to improve diagnostic accuracy in detecting HCC, metastasis, and healthy liver tissue. The system incorporates automatic segmentation and classification. The liver lesion segmentation model is implemented using a residual attention U-Net. A 9-layer CNN classifier implements the lesion classification model; its input combines the results of the segmentation model with the original images. The dataset included 300 patients, with 223 used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, comprising 20 HCC cases, 27 with metastasis, and 30 healthy. In the test phase, the system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification. The proposed method is a preliminary study with great potential in helping radiologists diagnose liver cancers.
Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images.
The objective is to evaluate the feasibility of using ultrasound images to identify critical prognostic biomarkers for HER2-positive breast cancer (HER2+ BC). This study enrolled 512 female patients diagnosed with HER2-positive breast cancer through pathological validation at our institution from January 2016 to December 2021. Five distinct deep convolutional neural networks (DCNNs) and a deep ensemble (DE) approach were trained to classify axillary lymph node involvement (ALNM), lymphovascular invasion (LVI), and histological grade (HG). The efficacy of the models was evaluated based on accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps. The DeLong test was applied to compare differences in AUC among the models. The deep ensemble approach, as the most effective model, demonstrated AUCs and accuracy of 0.869 (95% CI: 0.802-0.936) and 69.7% for LVI, and 0.973 (95% CI: 0.949-0.998) and 73.8% for HG, providing superior classification performance in the context of imbalanced data (p < 0.05 by the DeLong test). For ALNM, AUC and accuracy were 0.780 (95% CI: 0.688-0.873) and 77.5%, comparable to the single models. The pretreatment ultrasound-based DE model holds promise as clinical guidance for predicting the pathological characteristics of patients with HER2-positive breast cancer, facilitating timely adjustments in treatment strategies.
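The reported AUCs have a useful probabilistic reading: AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U interpretation, with ties counted as one half). A minimal sketch on invented toy scores:

```python
def roc_auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties worth 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy model scores: positives mostly rank higher, one overlap.
scores = [0.9, 0.8, 0.35, 0.7, 0.4, 0.3]
labels = [1,   1,   1,    0,   0,   0]
print(roc_auc(scores, labels))  # 7/9 ≈ 0.778
```

This pairwise-ranking view also explains why AUC, unlike accuracy, is insensitive to the class imbalance the study highlights.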
Predicting Pathological Characteristics of HER2-Positive Breast Cancer from Ultrasound Images: a Deep Ensemble Approach.
Zhi-Hui Chen, Hai-Ling Zha, Qing Yao, Wen-Bo Zhang, Guang-Quan Zhou, Cui-Ying Li
Pub Date: 2024-08-26. DOI: 10.1007/s10278-024-01229-0
Pub Date: 2024-08-26. DOI: 10.1007/s10278-024-01240-5
Tony Felefly, Ziad Francis, Camille Roukoz, Georges Fares, Samir Achkar, Sandrine Yazbeck, Antoine Nasr, Manal Kordahi, Fares Azoury, Dolly Nehme Nasr, Elie Nasr, Georges Noël
Dedicated brain imaging for cancer patients is seldom recommended in the absence of symptoms. Non-enhanced CT (NE-CT) of the brain is increasingly available, mainly owing to the wider utilization of Positron Emission Tomography-CT (PET-CT) in cancer staging. Brain metastases (BM) are often hard to diagnose on NE-CT. This work aims to develop a 3D Convolutional Neural Network (3D-CNN) based on brain NE-CT to distinguish patients with and without BM. We retrospectively included NE-CT scans of 100 patients with single or multiple BM and 100 patients without brain imaging abnormalities. Patients whose largest lesion was < 5 mm were excluded. The largest tumor was manually segmented on a matched contrast-enhanced T1-weighted Magnetic Resonance Imaging (MRI) scan, and shape radiomics were extracted to determine the size and volume of the lesion. The brain was automatically segmented, and masked images were normalized and resampled. The dataset was split into training (70%) and validation (30%) sets. Multiple versions of a 3D-CNN were developed, and the best model was selected based on accuracy (ACC) on the validation set. The median Maximum-3D-Diameter of the largest tumor was 2.29 cm, and its median volume was 2.81 cc. A solitary BM was found in 27% of the patients, while 49% had > 5 BMs. The best model consisted of 4 convolutional layers with 3D average pooling layers, dropout layers of 50%, and a sigmoid activation function. Mean validation ACC was 0.983 (SD: 0.020), and the mean area under the receiver-operating characteristic curve was 0.983 (SD: 0.023). Sensitivity was 0.983 (SD: 0.020). We developed an accurate 3D-CNN based on brain NE-CT to differentiate between patients with and without BM. The model merits further external validation.
A 3D Convolutional Neural Network Based on Non-enhanced Brain CT to Identify Patients with Brain Metastases.
Pub Date: 2024-08-26. DOI: 10.1007/s10278-024-01220-9
Guy Hembroff, Chad Klochko, Joseph Craig, Harikrishnan Changarnkothapeecherikkal, Richard Q Loi
Radiographic quality control is an integral component of the radiology workflow. In this study, we developed a convolutional neural network model tailored for automated quality control, specifically designed to detect and classify key attributes of wrist radiographs, including projection, laterality (based on the right/left marker), and the presence of hardware and/or casts. The model's primary objective was to ensure the congruence of results with image requisition metadata to pass the quality assessment. Using a dataset of 6283 wrist radiographs from 2591 patients, our multitask-capable deep learning model based on the DenseNet-121 architecture achieved high accuracy in classifying projections (F1 score of 97.23%), detecting casts (F1 score of 97.70%), and identifying surgical hardware (F1 score of 92.27%). The model's performance in laterality marker detection was lower (F1 score of 82.52%), particularly for partially visible or cut-off markers. This paper presents a comprehensive evaluation of our model's performance, highlighting its strengths, limitations, and the challenges encountered during its development and implementation. Furthermore, we outline planned future research directions aimed at refining and expanding the model's capabilities for improved clinical utility and patient care in radiographic quality control.
Improved Automated Quality Control of Skeletal Wrist Radiographs Using Deep Multitask Learning.
Pub Date: 2024-08-26 | DOI: 10.1007/s10278-024-01242-3
Noriko Kanemaru, Koichiro Yasaka, Nana Fujita, Jun Kanzawa, Osamu Abe
Early detection of patients with impending bone metastasis is crucial for improving prognosis. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) for identifying patients with bone metastasis from unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients with "metastasis" in their radiological reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of each radiological report (used as input data) and classified them into group 0 (no bone metastasis), group 1 (progressive bone metastasis), and group 2 (stable or decreased bone metastasis). The data for group 0 were under-sampled in the training and test datasets due to group imbalance. The best-performing model from the validation set was subsequently tested on the test dataset. Two additional radiologists (readers 1 and 2) classified the radiological reports in the test dataset for comparison. On the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 demonstrated accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times of 105 s, 2312 s, and 3094 s, respectively. The fine-tuned LLM identified patients with bone metastasis with satisfactory performance, comparable to or slightly lower than manual annotation by radiologists, in a noticeably shorter time.
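The under-sampling step described above (shrinking the dominant group 0 to mitigate class imbalance) can be sketched in a few lines. This is a generic illustration, not the authors' pipeline; the record format and `group` key are assumptions.

```python
import random
from collections import Counter


def undersample(records, label_key="group", seed=42):
    """Down-sample every class to the size of the smallest class,
    mitigating the dominance of the majority group (here, group 0)."""
    by_label = {}
    for r in records:
        by_label.setdefault(r[label_key], []).append(r)
    n_min = min(len(v) for v in by_label.values())
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    balanced = []
    for items in by_label.values():
        balanced.extend(rng.sample(items, n_min))
    return balanced


# Toy dataset mimicking the reported imbalance: group 0 dominates.
data = [{"group": 0}] * 90 + [{"group": 1}] * 7 + [{"group": 2}] * 3
counts = Counter(r["group"] for r in undersample(data))
print(counts)  # every group reduced to the size of the smallest group (3)
```

In practice the same idea is usually applied only to the training split (and, as in this study, to the evaluation split when a balanced test set is desired).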
{"title":"The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports.","authors":"Noriko Kanemaru, Koichiro Yasaka, Nana Fujita, Jun Kanzawa, Osamu Abe","doi":"10.1007/s10278-024-01242-3","DOIUrl":"https://doi.org/10.1007/s10278-024-01242-3","url":null,"abstract":"<p><p>Early detection of patients with impending bone metastasis is crucial for prognosis improvement. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) in extracting patients with bone metastasis in unstructured Japanese radiology report and to compare its performance with manual annotation. This retrospective study included patients with \"metastasis\" in radiological reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of the radiological report (used as input data) and classified them into groups 0 (no bone metastasis), 1 (progressive bone metastasis), and 2 (stable or decreased bone metastasis). The data for group 0 was under-sampled in training and test datasets due to group imbalance. The best-performing model from the validation set was subsequently tested using the testing dataset. Two additional radiologists (readers 1 and 2) were involved in classifying radiological reports within the test dataset for testing purposes. The fine-tuned LLM, reader 1, and reader 2 demonstrated an accuracy of 0.979, 0.996, and 0.993, sensitivity for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954, and time required for classification (s) of 105, 2312, and 3094 in under-sampled test dataset (n = 711), respectively. 
Fine-tuned LLM extracted patients with bone metastasis, demonstrating satisfactory performance that was comparable to or slightly lower than manual annotation by radiologists in a noticeably shorter time.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142074927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1007/s10278-024-01236-1
Eric K Lai, Evan Slavik, Bessie Ganim, Laurie A Perry, Caitlin Treuting, Troy Dee, Melissa Osborne, Cieara Presley, Alexander J Towbin
The widespread availability of smart devices has facilitated the use of medical photography, yet photodocumentation workflows are seldom implemented in healthcare organizations due to integration challenges with electronic health records (EHR) and standard clinical workflows. This manuscript details the implementation of a comprehensive photodocumentation workflow across all phases of care at a large healthcare organization, emphasizing efficiency and patient safety. From November 2018 to December 2023, healthcare workers at our institution uploaded nearly 32,000 photodocuments spanning 54 medical specialties. The photodocumentation process requires as few as 11 mouse clicks and keystrokes within the EHR and on smart devices. Automation played a crucial role in driving workflow efficiency and patient safety. For example, body part rules were used to automate the application of a sensitive label to photos of the face, chest, external genitalia, and buttocks. This automation was successful, with over 50% of the uploaded photodocuments being labeled as sensitive. Our implementation highlights the potential for standardizing photodocumentation workflows, thereby enhancing clinical documentation, improving patient care, and ensuring the secure handling of sensitive images.
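The body-part rule for automatically applying a sensitive label, as described above, amounts to a simple lookup against the listed regions. This is a minimal sketch under assumed field names (`body_part`, `sensitive`), not the organization's EHR implementation.

```python
# Regions named in the workflow as triggering an automatic sensitive label.
SENSITIVE_BODY_PARTS = {"face", "chest", "external genitalia", "buttocks"}


def apply_sensitive_label(photo):
    """Flag a photodocument as sensitive when its recorded body part
    matches one of the rule-listed regions (case-insensitive)."""
    if photo.get("body_part", "").lower() in SENSITIVE_BODY_PARTS:
        photo["sensitive"] = True
    else:
        photo.setdefault("sensitive", False)
    return photo


print(apply_sensitive_label({"id": 1, "body_part": "Face"})["sensitive"])   # True
print(apply_sensitive_label({"id": 2, "body_part": "wrist"})["sensitive"])  # False
```

Automating the label at upload time, rather than relying on the photographer to set it, is what lets a rule like this cover more than half of all uploaded photodocuments without extra clicks.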
{"title":"Implementing a Photodocumentation Program.","authors":"Eric K Lai, Evan Slavik, Bessie Ganim, Laurie A Perry, Caitlin Treuting, Troy Dee, Melissa Osborne, Cieara Presley, Alexander J Towbin","doi":"10.1007/s10278-024-01236-1","DOIUrl":"https://doi.org/10.1007/s10278-024-01236-1","url":null,"abstract":"<p><p>The widespread availability of smart devices has facilitated the use of medical photography, yet photodocumentation workflows are seldom implemented in healthcare organizations due to integration challenges with electronic health records (EHR) and standard clinical workflows. This manuscript details the implementation of a comprehensive photodocumentation workflow across all phases of care at a large healthcare organization, emphasizing efficiency and patient safety. From November 2018 to December 2023, healthcare workers at our institution uploaded nearly 32,000 photodocuments spanning 54 medical specialties. The photodocumentation process requires as few as 11 mouse clicks and keystrokes within the EHR and on smart devices. Automation played a crucial role in driving workflow efficiency and patient safety. For example, body part rules were used to automate the application of a sensitive label to photos of the face, chest, external genitalia, and buttocks. This automation was successful, with over 50% of the uploaded photodocuments being labeled as sensitive. 
Our implementation highlights the potential for standardizing photodocumentation workflows, thereby enhancing clinical documentation, improving patient care, and ensuring the secure handling of sensitive images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142038686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}