
Journal of imaging informatics in medicine — Latest Publications

Differences in Topography of Individual Amyloid Brain Networks by Amyloid PET Images in Healthy Control, Mild Cognitive Impairment, and Alzheimer's Disease.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01230-7
Tsung-Ying Ho, Shu-Hua Huang, Chi-Wei Huang, Kun-Ju Lin, Jung-Lung Hsu, Kuo-Lun Huang, Ko-Ting Chen, Chiung-Chih Chang, Ing-Tsung Hsiao, Sheng-Yao Huang

Amyloid plaques, implicated in Alzheimer's disease, exhibit a spatial propagation pattern through interconnected brain regions, suggesting network-driven dissemination. This study utilizes PET imaging to investigate these brain connections and introduces an innovative method for analyzing the amyloid network. A modified version of a previously established method is applied to explore distinctive patterns of connectivity alterations across cognitive performance domains. PET images illustrate differences in amyloid accumulation, complemented by quantitative network indices. The normal control group shows minimal amyloid accumulation and preserved network connectivity. The MCI group displays intermediate amyloid deposits and partial similarity to both normal controls and AD patients, reflecting the evolving nature of cognitive decline. Alzheimer's disease patients exhibit high amyloid levels and pronounced disruptions in network connectivity, reflected in low global efficiency (Eg) and local efficiency (Eloc). Connectivity alterations are found mostly in the temporal lobe, particularly in regions related to memory and cognition. Network connectivity alterations, combined with amyloid PET imaging, show potential as discriminative markers for different cognitive states. Dataset-specific variations must be considered when interpreting connectivity patterns. The variability in the overlap between MCI and AD emphasizes the heterogeneity of cognitive decline progression, suggesting personalized approaches for neurodegenerative disorders. This study contributes to understanding the evolving network characteristics associated with normal cognition, MCI, and AD, offering valuable insights for developing diagnostic and prognostic markers.
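
As an illustrative aside (not the authors' code), the two network indices reported above can be computed from a regional amyloid connectivity matrix with NumPy and networkx; the matrix, region count, and threshold below are placeholders, not the study's data:

```python
# Minimal sketch: global efficiency (Eg) and local efficiency (Eloc)
# from a thresholded connectivity matrix, assuming the networkx library.
import numpy as np
import networkx as nx

def network_efficiencies(conn: np.ndarray, threshold: float = 0.3):
    """Binarize a connectivity matrix and return (Eg, Eloc)."""
    adj = (np.abs(conn) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                      # remove self-loops
    graph = nx.from_numpy_array(adj)
    e_global = nx.global_efficiency(graph)        # Eg
    e_local = nx.local_efficiency(graph)          # Eloc
    return e_global, e_local

# Example: a random symmetric matrix standing in for regional amyloid correlations
rng = np.random.default_rng(0)
m = rng.random((90, 90))
conn = (m + m.T) / 2
print(network_efficiencies(conn))
```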

Citations: 0
Unsupervised and Self-supervised Learning in Low-Dose Computed Tomography Denoising: Insights from Training Strategies.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01213-8
Feixiang Zhao, Mingzhe Liu, Mingrong Xiang, Dongfen Li, Xin Jiang, Xiance Jin, Cai Lin, Ruili Wang

In recent years, X-ray low-dose computed tomography (LDCT) has garnered widespread attention due to its significant reduction in the risk of patient radiation exposure. However, LDCT images often contain a substantial amount of noise, adversely affecting diagnostic quality. To mitigate this, a plethora of LDCT denoising methods have been proposed. Among them, deep learning (DL) approaches have emerged as the most effective, due to their robust feature extraction capabilities. Yet, the prevalent use of supervised training paradigms is often impractical because of the challenges in acquiring low-dose and normal-dose CT pairs in clinical settings. Consequently, unsupervised and self-supervised deep learning methods have been introduced for LDCT denoising, showing considerable potential for clinical applications. The efficacy of these methods hinges on their training strategies, yet there appears to be no comprehensive review of these strategies. Our review aims to address this gap, offering insights and guidance for researchers and practitioners. Based on training strategies, we categorize LDCT denoising methods into six groups: (i) cycle consistency-based, (ii) score matching-based, (iii) noise statistics-based, (iv) similarity-based, (v) LDCT synthesis model-based, and (vi) hybrid methods. For each category, we delve into the theoretical underpinnings, training strategies, strengths, and limitations. In addition, we summarize the open-source code of the reviewed methods. Finally, the review concludes with a discussion of open issues and future research directions.
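
For readers unfamiliar with category (i), the following is a minimal sketch of the cycle-consistency term used in unpaired low-dose-to-normal-dose translation, assuming PyTorch; the toy generators stand in for the real denoising networks and are not from any reviewed method:

```python
# Minimal sketch of a cycle-consistency loss: each image should be recovered
# after a round trip between the low-dose (LD) and normal-dose (ND) domains.
import torch
import torch.nn as nn

g_ld2nd = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))   # toy generator LD -> ND
g_nd2ld = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))   # toy generator ND -> LD
l1 = nn.L1Loss()

def cycle_loss(ld_batch: torch.Tensor, nd_batch: torch.Tensor) -> torch.Tensor:
    """L1 cycle-consistency term, typically added to adversarial losses."""
    ld_cycled = g_nd2ld(g_ld2nd(ld_batch))   # LD -> ND -> LD
    nd_cycled = g_ld2nd(g_nd2ld(nd_batch))   # ND -> LD -> ND
    return l1(ld_cycled, ld_batch) + l1(nd_cycled, nd_batch)

loss = cycle_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
loss.backward()
print(loss.item())
```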

Citations: 0
Application of Wiener Filter Based on Improved BB Gradient Descent in Iris Image Restoration.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01238-z
Chuandong Qin, Yiqing Zhang

Iris recognition, renowned for its exceptional precision, has been extensively utilized across diverse industries. However, the presence of noise and blur frequently compromises the quality of iris images, thereby adversely affecting recognition accuracy. In this research, we refine the traditional Wiener filter image restoration technique by integrating it with a gradient descent strategy, specifically employing the Barzilai-Borwein (BB) step size selection. This approach is designed to enhance both the precision and resilience of iris recognition systems. The BB gradient method optimizes the parameters of the Wiener filter under simulated blurring and noise conditions applied to the iris images, restoring images degraded by blur and noise with significantly improved clarity and, consequently, a notable elevation in recognition performance. Our experimental results demonstrate that this method surpasses conventional filtering techniques in terms of both subjective visual quality assessments and objective peak signal-to-noise ratio (PSNR) evaluations.
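
A minimal sketch of the Barzilai-Borwein (BB1) step-size rule that drives the gradient descent described above, assuming NumPy; the quadratic objective is only a stand-in for the actual Wiener-filter restoration loss, and the iteration count and initial step are illustrative:

```python
# Minimal sketch: gradient descent with the BB1 step size alpha = (s.s)/(s.y),
# where s and y are successive parameter and gradient differences.
import numpy as np

def bb_gradient_descent(grad, x0, iters=50, step0=1e-3):
    """Minimize a smooth function given its gradient, using BB1 steps."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - step0 * g_prev                   # ordinary first step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev             # parameter / gradient differences
        alpha = (s @ s) / (s @ y + 1e-12)         # BB1 step size
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Example: minimize ||A x - b||^2 as a surrogate objective
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: 2 * A.T @ (A @ x - b)
print(bb_gradient_descent(grad, np.zeros(2)))
```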

Citations: 0
Dual-Tree Complex Wavelet Pooling and Attention-Based Modified U-Net Architecture for Automated Breast Thermogram Segmentation and Classification.
Pub Date : 2024-09-03 DOI: 10.1007/s10278-024-01239-y
Lalit Garia, Hariharan Muthusamy

Thermography is a non-invasive, non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) using the dual-tree complex wavelet transform (DTCWT) and attention gates was proposed to segment frontal- and lateral-view breast thermograms and outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from the segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. Compared with other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology achieved a higher accuracy of 99.90% using VGG16 deep features with NCA and a Random Forest classifier. Simulation results show that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.
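
For reference, the Dice coefficient and sensitivity reported for the segmentation results can be computed as in the following sketch (independent of the authors' implementation), assuming binary NumPy masks of identical shape:

```python
# Minimal sketch: Dice coefficient and sensitivity between a predicted and a
# ground-truth binary segmentation mask.
import numpy as np

def dice_and_sensitivity(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()        # true-positive pixels
    dice = 2 * tp / (pred.sum() + truth.sum() + eps)
    sensitivity = tp / (truth.sum() + eps)
    return dice, sensitivity

# Example with two overlapping square masks
pred = np.zeros((128, 128)); pred[30:90, 30:90] = 1
truth = np.zeros((128, 128)); truth[35:95, 35:95] = 1
print(dice_and_sensitivity(pred, truth))
```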

Citations: 0
Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images.
Pub Date : 2024-09-03 DOI: 10.1007/s10278-024-01192-w
Vincent-Béni Sèna Zossou, Freddy Houéhanou Rodrigue Gnangnon, Olivier Biaou, Florent de Vathaire, Rodrigue S Allodji, Eugène C Ezin

Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing the grayscale variations in liver tissue across different computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases. Accurately differentiating between these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to enhance diagnostic accuracy in detecting HCC, metastasis, and healthy liver tissue. The system incorporates automatic segmentation and classification. The liver lesion segmentation model is implemented using a residual attention U-Net, and a 9-layer CNN classifier implements the lesion classification model. The classifier's input is the combination of the segmentation model's results with the original images. The dataset included 300 patients, with 223 used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, consisting of 20 HCC cases, 27 with metastasis, and 30 healthy. The system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification, both in the test phase. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
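
A hedged sketch of the two-stage idea, assuming PyTorch: the classifier receives the original CT slice concatenated with the segmentation output as a second input channel. The layer sizes and class order are illustrative and do not reproduce the paper's 9-layer configuration:

```python
# Minimal sketch: a small CNN classifier whose input stacks the CT image and
# the segmentation mask along the channel dimension.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):          # e.g. HCC / metastasis / healthy
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, image: torch.Tensor, seg_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, seg_mask], dim=1)       # stack image and mask channels
        return self.head(self.features(x).flatten(1))

logits = LesionClassifier()(torch.randn(4, 1, 256, 256), torch.rand(4, 1, 256, 256))
print(logits.shape)                                   # torch.Size([4, 3])
```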

Citations: 0
Predicting Pathological Characteristics of HER2-Positive Breast Cancer from Ultrasound Images: a Deep Ensemble Approach.
Pub Date : 2024-08-26 DOI: 10.1007/s10278-024-01229-0
Zhi-Hui Chen, Hai-Ling Zha, Qing Yao, Wen-Bo Zhang, Guang-Quan Zhou, Cui-Ying Li

The objective is to evaluate the feasibility of utilizing ultrasound images to identify critical prognostic biomarkers for HER2-positive breast cancer (HER2+ BC). This study enrolled 512 female patients diagnosed with HER2-positive breast cancer through pathological validation at our institution from January 2016 to December 2021. Five distinct deep convolutional neural networks (DCNNs) and a deep ensemble (DE) approach were trained to classify axillary lymph node involvement (ALNM), lymphovascular invasion (LVI), and histological grade (HG). The efficacy of the models was evaluated based on accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps. The DeLong test was applied to compare differences in AUC among the models. The deep ensemble approach, as the most effective model, demonstrated an AUC and accuracy of 0.869 (95% CI: 0.802-0.936) and 69.7% for LVI, and 0.973 (95% CI: 0.949-0.998) and 73.8% for HG, thus providing superior classification performance in the context of imbalanced data (p < 0.05 by the DeLong test). For ALNM, the AUC and accuracy were 0.780 (95% CI: 0.688-0.873) and 77.5%, comparable to the other single models. The pretreatment ultrasound-based DE model holds promise as clinical guidance for predicting the pathological characteristics of patients with HER2-positive breast cancer, thereby facilitating timely adjustments in treatment strategies.
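
A minimal sketch of the deep-ensemble step, assuming PyTorch: softmax probabilities from several independently trained networks are averaged before the final decision. The member models below are toy placeholders, not the study's DCNNs:

```python
# Minimal sketch: average class probabilities over ensemble members.
import torch
import torch.nn as nn

# Five toy "members"; in practice these would be separately trained DCNNs.
members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2)) for _ in range(5)]

@torch.no_grad()
def ensemble_predict(batch: torch.Tensor) -> torch.Tensor:
    """Return the mean softmax probability across all ensemble members."""
    probs = [torch.softmax(m(batch), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)

print(ensemble_predict(torch.randn(2, 3, 224, 224)))   # shape (2, 2)
```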

Citations: 0
A 3D Convolutional Neural Network Based on Non-enhanced Brain CT to Identify Patients with Brain Metastases.
Pub Date : 2024-08-26 DOI: 10.1007/s10278-024-01240-5
Tony Felefly, Ziad Francis, Camille Roukoz, Georges Fares, Samir Achkar, Sandrine Yazbeck, Antoine Nasr, Manal Kordahi, Fares Azoury, Dolly Nehme Nasr, Elie Nasr, Georges Noël

Dedicated brain imaging for cancer patients is seldom recommended in the absence of symptoms. Non-enhanced CT (NE-CT) of the brain is increasingly available, mainly owing to the wider utilization of Positron Emission Tomography-CT (PET-CT) in cancer staging. Brain metastases (BM) are often hard to diagnose on NE-CT. This work aims to develop a 3D Convolutional Neural Network (3D-CNN) based on brain NE-CT to distinguish patients with and without BM. We retrospectively included NE-CT scans for 100 patients with single or multiple BM and 100 patients without brain imaging abnormalities. Patients whose largest lesion was < 5 mm were excluded. The largest tumor was manually segmented on a matched contrast-enhanced T1-weighted Magnetic Resonance Imaging (MRI) study, and shape radiomics were extracted to determine the size and volume of the lesion. The brain was automatically segmented, and masked images were normalized and resampled. The dataset was split into training (70%) and validation (30%) sets. Multiple versions of a 3D-CNN were developed, and the best model was selected based on accuracy (ACC) on the validation set. The median maximum 3D diameter of the largest tumor was 2.29 cm, and its median volume was 2.81 cc. Solitary BM were found in 27% of the patients, while 49% had > 5 BMs. The best model consisted of 4 convolutional layers with 3D average pooling layers, dropout layers of 50%, and a sigmoid activation function. Mean validation ACC was 0.983 (SD: 0.020), and the mean area under the receiver operating characteristic curve was 0.983 (SD: 0.023). Sensitivity was 0.983 (SD: 0.020). We developed an accurate 3D-CNN based on brain NE-CT to differentiate between patients with and without BM. The model merits further external validation.
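
A hedged PyTorch sketch of a network matching the description of the best model (four convolutional layers with 3D average pooling, 50% dropout, and a sigmoid output); the channel counts and input size are assumptions, not taken from the paper:

```python
# Minimal sketch of a 3D-CNN that outputs a probability of brain metastases.
import torch
import torch.nn as nn

class BrainMetCNN(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 8, 16, 32, 64]                    # assumed channel progression
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
                       nn.AvgPool3d(2), nn.Dropout3d(0.5)]
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))            # probability of BM

print(BrainMetCNN()(torch.randn(1, 1, 64, 64, 64)).shape)   # torch.Size([1, 1])
```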

Citations: 0
Improved Automated Quality Control of Skeletal Wrist Radiographs Using Deep Multitask Learning.
Pub Date : 2024-08-26 DOI: 10.1007/s10278-024-01220-9
Guy Hembroff, Chad Klochko, Joseph Craig, Harikrishnan Changarnkothapeecherikkal, Richard Q Loi

Radiographic quality control is an integral component of the radiology workflow. In this study, we developed a convolutional neural network model tailored for automated quality control, specifically designed to detect and classify key attributes of wrist radiographs including projection, laterality (based on the right/left marker), and the presence of hardware and/or casts. The model's primary objective was to ensure the congruence of results with image requisition metadata to pass the quality assessment. Using a dataset of 6283 wrist radiographs from 2591 patients, our multitask-capable deep learning model based on DenseNet 121 architecture achieved high accuracy in classifying projections (F1 Score of 97.23%), detecting casts (F1 Score of 97.70%), and identifying surgical hardware (F1 Score of 92.27%). The model's performance in laterality marker detection was lower (F1 Score of 82.52%), particularly for partially visible or cut-off markers. This paper presents a comprehensive evaluation of our model's performance, highlighting its strengths, limitations, and the challenges encountered during its development and implementation. Furthermore, we outline planned future research directions aimed at refining and expanding the model's capabilities for improved clinical utility and patient care in radiographic quality control.
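
A hedged sketch of the multitask design, assuming PyTorch and torchvision: a shared DenseNet-121 backbone feeds separate heads for projection, laterality marker, cast, and hardware. The class counts and head names are assumptions, not the authors' configuration:

```python
# Minimal sketch: DenseNet-121 backbone with one classification head per task.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class WristQCNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = densenet121(weights=None)          # untrained backbone
        n_feat = backbone.classifier.in_features
        backbone.classifier = nn.Identity()           # keep pooled features only
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            "projection": nn.Linear(n_feat, 3),       # assumed number of projections
            "laterality": nn.Linear(n_feat, 2),       # right / left marker
            "cast": nn.Linear(n_feat, 2),
            "hardware": nn.Linear(n_feat, 2),
        })

    def forward(self, x: torch.Tensor) -> dict:
        feat = self.backbone(x)
        return {name: head(feat) for name, head in self.heads.items()}

out = WristQCNet()(torch.randn(2, 3, 224, 224))
print({k: v.shape for k, v in out.items()})
```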

Citations: 0
The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports.
Pub Date : 2024-08-26 DOI: 10.1007/s10278-024-01242-3
Noriko Kanemaru, Koichiro Yasaka, Nana Fujita, Jun Kanzawa, Osamu Abe

Early detection of patients with impending bone metastasis is crucial for prognosis improvement. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) in extracting patients with bone metastasis from unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients with "metastasis" in radiological reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of the radiological reports (used as input data) and classified them into group 0 (no bone metastasis), group 1 (progressive bone metastasis), and group 2 (stable or decreased bone metastasis). The data for group 0 were under-sampled in the training and test datasets due to group imbalance. The best-performing model from the validation set was subsequently tested using the test dataset. Two additional radiologists (readers 1 and 2) classified the radiological reports within the test dataset for comparison. The fine-tuned LLM, reader 1, and reader 2 demonstrated an accuracy of 0.979, 0.996, and 0.993, sensitivity for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954, and time required for classification (s) of 105, 2312, and 3094, respectively, in the under-sampled test dataset (n = 711). The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly lower than manual annotation by radiologists, in a noticeably shorter time.
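
As a rough illustration only, the following sketch sets up a generic Hugging Face sequence-classification model for the three report groups (inference shown; training would follow the usual fine-tuning loop). The backbone name, example reports, and label mapping are placeholders, and this is not the locally run LLM used by the authors:

```python
# Minimal sketch: a three-class radiology-report classifier built with the
# transformers library. The classification head starts randomly initialized
# and would need fine-tuning on labeled reports before the outputs are useful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"           # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

reports = ["Known lung cancer. No new osseous lesions.",
           "Increasing sclerotic lesions in the thoracic spine."]
batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits                    # shape (2, 3): groups 0 / 1 / 2
print(logits.argmax(dim=1))
```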

Citations: 0
Implementing a Photodocumentation Program.
Pub Date : 2024-08-22 DOI: 10.1007/s10278-024-01236-1
Eric K Lai, Evan Slavik, Bessie Ganim, Laurie A Perry, Caitlin Treuting, Troy Dee, Melissa Osborne, Cieara Presley, Alexander J Towbin

The widespread availability of smart devices has facilitated the use of medical photography, yet photodocumentation workflows are seldom implemented in healthcare organizations due to integration challenges with electronic health records (EHR) and standard clinical workflows. This manuscript details the implementation of a comprehensive photodocumentation workflow across all phases of care at a large healthcare organization, emphasizing efficiency and patient safety. From November 2018 to December 2023, healthcare workers at our institution uploaded nearly 32,000 photodocuments spanning 54 medical specialties. The photodocumentation process requires as few as 11 mouse clicks and keystrokes within the EHR and on smart devices. Automation played a crucial role in driving workflow efficiency and patient safety. For example, body part rules were used to automate the application of a sensitive label to photos of the face, chest, external genitalia, and buttocks. This automation was successful, with over 50% of the uploaded photodocuments being labeled as sensitive. Our implementation highlights the potential for standardizing photodocumentation workflows, thereby enhancing clinical documentation, improving patient care, and ensuring the secure handling of sensitive images.
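
A minimal sketch, in plain Python, of the kind of body-part rule described above for automatically flagging sensitive photographs; the region list, metadata fields, and example values are illustrative, not the organization's actual rule set:

```python
# Minimal sketch: flag a photodocument as sensitive based on its body part.
SENSITIVE_REGIONS = {"face", "chest", "external genitalia", "buttocks"}

def label_photo(metadata: dict) -> dict:
    """Attach a 'sensitive' label when the documented body part requires it."""
    body_part = metadata.get("body_part", "").lower()
    metadata["sensitive"] = body_part in SENSITIVE_REGIONS
    return metadata

print(label_photo({"body_part": "Face", "uploader": "RN-123"}))
# {'body_part': 'Face', 'uploader': 'RN-123', 'sensitive': True}
```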

Citations: 0