
Journal of Digital Imaging: Latest Publications

Super-resolution Deep Learning Reconstruction Cervical Spine 1.5T MRI: Improved Interobserver Agreement in Evaluations of Neuroforaminal Stenosis Compared to Conventional Deep Learning Reconstruction
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-26 | DOI: 10.1007/s10278-024-01112-y
Koichiro Yasaka, Shunichi Uehara, Shimpei Kato, Yusuke Watanabe, Taku Tajima, Hiroyuki Akai, Naoki Yoshioka, Masaaki Akahane, Kuni Ohtomo, Osamu Abe, Shigeru Kiryu

The aim of this study was to investigate whether super-resolution deep learning reconstruction (SR-DLR) is superior to conventional deep learning reconstruction (DLR) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis using 1.5T cervical spine MRI. This retrospective study included 39 patients who underwent 1.5T cervical spine MRI. T2-weighted sagittal images were reconstructed with SR-DLR and DLR. Three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis, depictions of the vertebrae, spinal cord and neural foramina, sharpness, noise, artefacts and diagnostic acceptability. In quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (SNR) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. Interobserver agreement in the evaluations of neuroforaminal stenosis using SR-DLR and DLR was 0.422–0.571 and 0.410–0.542, respectively. The kappa values between reader 1 vs. reader 2 and reader 2 vs. reader 3 significantly differed. Two of the three readers rated depictions of the spinal cord, sharpness, and diagnostic acceptability as significantly better with SR-DLR than with DLR. Both SNR and edge slope (/mm) were also significantly better with SR-DLR (12.9 and 6031, respectively) than with DLR (11.5 and 3741, respectively) (p < 0.001 for both). In conclusion, compared to DLR, SR-DLR improved interobserver agreement in the evaluations of neuroforaminal stenosis using 1.5T cervical spine MRI.
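The agreement figures quoted here are pairwise kappa statistics. As a minimal sketch, Cohen's kappa for two readers' ordinal stenosis grades (the grades below are toy values, not the study's ratings) can be computed as:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa between two raters' categorical grades."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from each rater's marginal grade frequencies
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# hypothetical neuroforaminal stenosis grades (0 = none ... 3 = severe)
reader1 = [0, 1, 1, 2, 3, 2, 0, 1]
reader2 = [0, 1, 2, 2, 3, 1, 0, 1]
kappa = cohens_kappa(reader1, reader2)
```

Values in the 0.41–0.60 band, like the ranges reported above, are conventionally read as moderate agreement.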

Citations: 0
Multimodality Fusion Strategies in Eye Disease Diagnosis
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-19 | DOI: 10.1007/s10278-024-01105-x
Sara El-Ateif, Ali Idri

Multimodality fusion has gained significance in medical applications, particularly in diagnosing challenging diseases like eye diseases, notably diabetic eye diseases that pose risks of vision loss and blindness. Mono-modality eye disease diagnosis proves difficult, often missing crucial disease indicators. In response, researchers advocate multimodality-based approaches to enhance diagnostics. This study is a unique exploration, evaluating three multimodality fusion strategies—early, joint, and late—in conjunction with state-of-the-art convolutional neural network models for automated eye disease binary detection across three datasets: fundus fluorescein angiography, macula, and combination of digital retinal images for vessel extraction, structured analysis of the retina, and high-resolution fundus. Findings reveal the efficacy of each fusion strategy: type 0 early fusion with DenseNet121 achieves an impressive 99.45% average accuracy. InceptionResNetV2 emerges as the top-performing joint fusion architecture with an average accuracy of 99.58%. Late fusion ResNet50V2 achieves a perfect score of 100% across all metrics, surpassing both early and joint fusion. Comparative analysis demonstrates that late fusion ResNet50V2 matches the accuracy of state-of-the-art feature-level fusion model for multiview learning. In conclusion, this study substantiates late fusion as the optimal strategy for eye disease diagnosis compared to early and joint fusion, showcasing its superiority in leveraging multimodal information.
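The three strategies differ in where the modalities are combined: early fusion concatenates raw inputs before a single network, joint fusion merges intermediate features, and late fusion combines per-branch decisions. A schematic numpy sketch contrasting early and late fusion (random stand-in features and a logistic score in place of the paper's CNNs; everything here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_predict(x, w):
    """Stand-in for one per-modality CNN: a logistic score from pooled features."""
    return 1 / (1 + np.exp(-(x @ w)))

# two hypothetical modalities for 4 eyes (e.g. fundus vs. angiography features)
x_fundus = rng.normal(size=(4, 8))
x_angio = rng.normal(size=(4, 8))
w_fundus = rng.normal(size=8)
w_angio = rng.normal(size=8)

# early fusion: concatenate inputs and feed one model
early_in = np.concatenate([x_fundus, x_angio], axis=1)   # shape (4, 16)

# late fusion: run each branch to a decision, then average the scores
p_fundus = branch_predict(x_fundus, w_fundus)
p_angio = branch_predict(x_angio, w_angio)
p_late = (p_fundus + p_angio) / 2
labels = (p_late >= 0.5).astype(int)                     # binary eye-disease call
```

Late fusion's appeal, consistent with the result above, is that each branch can specialize on its modality before the decisions are combined.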

Citations: 0
Left Ventricular Segmentation, Warping, and Myocardial Registration for Automated Strain Measurement
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-19 | DOI: 10.1007/s10278-024-01119-5
Kuan-Chih Huang, Donna Shu-Han Lin, Geng-Shi Jeng, Ting-Tse Lin, Lian-Yu Lin, Chih-Kuo Lee, Lung-Chun Lin

The left ventricular global longitudinal strain (LVGLS) is a crucial prognostic indicator. However, inconsistencies in measurements due to the speckle tracking algorithm and manual adjustments have hindered its standardization and democratization. To solve this issue, we proposed a fully automated strain measurement by artificial intelligence-assisted LV segmentation contours. The LV segmentation model was trained from echocardiograms of 368 adults (11,125 frames). We compared the registration-like effects of dynamic time warping (DTW) with speckle tracking on a synthetic echocardiographic dataset in experiment-1. In experiment-2, we enrolled 80 patients to compare the DTW method with commercially available software. In experiment-3, we combined the segmentation model and DTW method to create the artificial intelligence (AI)-DTW method, which was then tested on 40 patients with general LV morphology, 20 with dilated cardiomyopathy (DCMP), 20 with transthyretin-associated cardiac amyloidosis (ATTR-CA), 20 with severe aortic stenosis (AS), and 20 with severe mitral regurgitation (MR). Experiments-1 and -2 revealed that the DTW method is consistent with dedicated software. In experiment-3, the AI-DTW strain method showed comparable results for general LV morphology (bias − 0.137 ± 0.398%), DCMP (− 0.397 ± 0.607%), ATTR-CA (0.095 ± 0.581%), AS (0.334 ± 0.358%), and MR (0.237 ± 0.490%). Moreover, the strain curves showed a high correlation in their characteristics, with R-squared values of 0.8879–0.9452 for those LV morphology in experiment-3. Measuring LVGLS through dynamic warping of segmentation contour is a feasible method compared to traditional tracking techniques. This approach has the potential to decrease the need for manual demarcation and make LVGLS measurements more efficient and user-friendly for daily practice.
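Dynamic time warping aligns two sampled sequences by permitting non-linear stretching along the index axis, which is what gives the registration-like effect on contour points. A textbook O(nm) sketch of the cumulative alignment cost (an illustration, not the authors' implementation):

```python
import numpy as np

def dtw_cost(a, b):
    """Classic dynamic time warping between two 1-D sequences;
    returns the minimal cumulative alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# two hypothetical contour-point traces: same waveform, shifted in phase
t = np.linspace(0, 2 * np.pi, 50)
trace1 = np.sin(t)
trace2 = np.sin(t + 0.3)
warped_cost = dtw_cost(trace1, trace2)        # small despite the phase shift
pointwise = float(np.abs(trace1 - trace2).sum())
```

Because DTW may repeat or skip samples, `warped_cost` is never larger than the rigid point-by-point distance, which is why it behaves like a registration step for myocardial contours.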

Citations: 0
Real-Time Optimal Synthetic Inversion Recovery Image Selection (RT-OSIRIS) for Deep Brain Stimulation Targeting
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-19 | DOI: 10.1007/s10278-024-01117-7
Vishal Patel, Shengzhen Tao, Xiangzhi Zhou, Chen Lin, Erin Westerhold, Sanjeet Grewal, Erik H. Middlebrooks

Deep brain stimulation (DBS) is a method of electrical neuromodulation used to treat a variety of neuropsychiatric conditions including essential tremor, Parkinson’s disease, epilepsy, and obsessive–compulsive disorder. The procedure requires precise placement of electrodes such that the electrical contacts lie within or in close proximity to specific target nuclei and tracts located deep within the brain. DBS electrode trajectory planning has become increasingly dependent on direct targeting with the need for precise visualization of targets. MRI is the primary tool for direct visualization, and this has led to the development of numerous sequences to aid in visualization of different targets. Synthetic inversion recovery images, specified by an inversion time parameter, can be generated from T1 relaxation maps, and this represents a promising method for modifying the contrast of deep brain structures to accentuate target areas using a single acquisition. However, there is currently no accessible method for dynamically adjusting the inversion time parameter and observing the effects in real-time in order to choose the optimal value. In this work, we examine three different approaches to implementing an application for real-time optimal synthetic inversion recovery image selection and evaluate them based on their ability to display continually-updated synthetic inversion recovery images as the user modifies the inversion time parameter. These methods include continuously computing the inversion recovery equation at each voxel in the image volume, limiting the computation only to the voxels of the orthogonal slices currently displayed on screen, or using a series of lookup tables with precomputed solutions to the inversion recovery equation. We find the latter implementation provides for the quickest display updates both when modifying the inversion time and when scrolling through the image. We introduce a publicly available cross-platform application built around this conclusion. We also briefly discuss other details of the implementations and considerations for extensions to other use cases.
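The winning lookup-table approach can be sketched with a simplified magnitude IR signal model, |1 − 2·exp(−TI/T1)|; the equation (which neglects TR effects), the grid spacings, and the bin ranges below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def ir_signal(t1, ti):
    """Simplified magnitude inversion-recovery signal |1 - 2*exp(-TI/T1)|
    (an assumption standing in for the paper's full IR equation)."""
    return np.abs(1 - 2 * np.exp(-ti / t1))

# precompute a LUT: rows = candidate TI values, cols = quantized T1 bins (ms)
t1_bins = np.arange(200, 4001, 10, dtype=float)
ti_grid = np.arange(50, 3001, 10, dtype=float)
lut = ir_signal(t1_bins[None, :], ti_grid[:, None])       # shape (n_TI, n_T1)

def synth_ir_image(t1_map, ti):
    """Render a synthetic IR image for a given TI by indexing the LUT
    instead of re-evaluating the exponential at every voxel."""
    ti_idx = int(round((ti - ti_grid[0]) / 10))
    t1_idx = np.clip(np.round((t1_map - t1_bins[0]) / 10),
                     0, len(t1_bins) - 1).astype(int)
    return lut[ti_idx, t1_idx]

t1_map = np.array([[800.0, 1200.0], [1600.0, 3000.0]])    # toy T1 map, ms
img = synth_ir_image(t1_map, ti=600.0)
```

Once the LUT is built, changing TI is a single row lookup per refresh, which is consistent with the authors' finding that precomputed tables give the quickest display updates.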

Citations: 0
The Classification of Lumbar Spondylolisthesis X-Ray Images Using Convolutional Neural Networks
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-18 | DOI: 10.1007/s10278-024-01115-9
Wutong Chen, Du Junsheng, Yanzhen Chen, Yifeng Fan, Hengzhi Liu, Chang Tan, Xuanming Shao, Xinzhi Li

We aimed to develop and validate a deep convolutional neural network (DCNN) model capable of accurately identifying spondylolysis or spondylolisthesis on lateral or dynamic X-ray images. A total of 2449 lumbar lateral and dynamic X-ray images were collected from two tertiary hospitals. These images were categorized into lumbar spondylolysis (LS), degenerative lumbar spondylolisthesis (DLS), and normal lumbar in a proportional manner. Subsequently, the images were randomly divided into training, validation, and test sets to establish a classification recognition network. The model training and validation process utilized the EfficientNetV2-M network. The model’s ability to generalize was assessed by conducting a rigorous evaluation on an entirely independent test set and comparing its performance with the diagnoses made by three orthopedists and three radiologists. The evaluation metrics employed to assess the model’s performance included accuracy, sensitivity, specificity, and F1 score. Additionally, the weight distribution of the network was visualized using gradient-weighted class activation mapping (Grad-CAM). For the doctor group, accuracy ranged from 87.9 to 90.0% (mean, 89.0%), precision ranged from 87.2 to 90.5% (mean, 89.0%), sensitivity ranged from 87.1 to 91.0% (mean, 89.2%), specificity ranged from 93.7 to 94.7% (mean, 94.3%), and F1 score ranged from 88.2 to 89.9% (mean, 89.1%). The DCNN model had accuracy of 92.0%, precision of 91.9%, sensitivity of 92.2%, specificity of 95.7%, and F1 score of 92.0%. Grad-CAM exhibited concentrations of highlighted areas in the intervertebral foraminal region. We developed a DCNN model that intelligently distinguished spondylolysis or spondylolisthesis on lumbar lateral or lumbar dynamic radiographs.
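The accuracy, precision, sensitivity, specificity, and F1 figures reported above all derive from binary confusion-matrix counts. A small sketch of those definitions (the counts are toy values, not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# hypothetical counts for one class vs. the rest
acc, prec, sens, spec, f1 = classification_metrics(tp=92, fp=8, tn=95, fn=8)
```

Note that for a multi-class problem like LS vs. DLS vs. normal, these are typically computed one-vs-rest per class and then averaged.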

Citations: 0
Synthetic Low-Energy Monochromatic Image Generation in Single-Energy Computed Tomography System Using a Transformer-Based Deep Learning Model
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-18 | DOI: 10.1007/s10278-024-01111-z
Yuhei Koike, Shingo Ohira, Sayaka Kihara, Yusuke Anetai, Hideki Takegawa, Satoaki Nakamura, Masayoshi Miyazaki, Koji Konishi, Noboru Tanigawa

While dual-energy computed tomography (DECT) technology introduces energy-specific information in clinical practice, single-energy CT (SECT) is predominantly used, limiting the number of people who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMI50keV) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. Among these, the model was built using data from 70 patients for whom only DECT images were available. The remaining 15 patients, for whom both DECT and SECT images were available, were used to predict from the actual SECT images. We used the SwinUNETR model to generate sVMI50keV. The image quality was evaluated, and the results were compared with those of the convolutional neural network-based model, Unet. The mean absolute errors from the true VMI50keV were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for Unet and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values compared with those of Unet. The contrast changes in sVMI50keV generated by SwinUNETR from SECT were closer to those of DECT-derived VMI50keV than the contrast changes in Unet-generated sVMI50keV. This study demonstrated the potential of transformer-based models for generating synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical and feasible solution to obtain low-energy VMIs from SECT data that can benefit a large number of facilities and patients without access to DECT technology.
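The headline comparison is the mean absolute error in Hounsfield units between each synthetic image and the DECT-derived VMI50keV; the metric itself reduces to (toy 2x2 attenuation maps shown, not the study's images):

```python
import numpy as np

def mae_hu(pred, truth):
    """Mean absolute error in Hounsfield units between a synthetic VMI
    and its DECT-derived reference."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(pred - truth)))

synthetic = np.array([[40.0, 55.0], [120.0, 300.0]])   # hypothetical sVMI (HU)
reference = np.array([[35.0, 60.0], [110.0, 310.0]])   # hypothetical true VMI (HU)
err = mae_hu(synthetic, reference)
```

Under this metric, the study's 33.0 HU (SwinUNETR) vs. 36.5 HU (Unet) gap corresponds to a smaller average per-voxel attenuation error for the transformer model.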

引用次数: 0
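The SwinUNETR study above compares models by mean absolute error (MAE) in Hounsfield units against the true VMI50keV. A minimal sketch of that metric on flattened voxel sequences (the function name and the toy values are illustrative, not study data):

```python
def mean_absolute_error_hu(pred, true):
    """Mean absolute error between two CT images in Hounsfield units.

    `pred` and `true` are equal-length flat sequences of HU values
    (in practice, flattened 3D volumes).
    """
    if len(pred) != len(true):
        raise ValueError("images must have the same number of voxels")
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Toy 4-voxel example (hypothetical values, not study data):
true_vmi = [40.0, 60.0, 80.0, 100.0]
synthetic_vmi = [45.0, 55.0, 90.0, 95.0]
print(mean_absolute_error_hu(synthetic_vmi, true_vmi))  # → 6.25
```

A real evaluation would run this over full 3D volumes, typically with vectorized array operations rather than a Python loop.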
Skin Cancer Image Segmentation Based on Midpoint Analysis Approach
IF 4.4 Zone 2 Engineering & Technology Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-04-16 DOI: 10.1007/s10278-024-01106-w
Uzma Saghir, Shailendra Kumar Singh, Moin Hasan

Skin cancer is a common disease that affects people of all ages, and its death toll rises when diagnosis is delayed. An automated mechanism for early-stage skin cancer detection is therefore needed to reduce the mortality rate. Visual examination with scanning or imaging screening is a common way of detecting this disease, but because skin cancer resembles other conditions, its accuracy is limited. This article introduces an innovative segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections. The main objective of the research is to segment lesions from dermoscopic skin images. The suggested framework is completed in two steps. The first step pre-processes the image by applying a bottom-hat filter for hair removal and enhancing the image with DCT and color coefficients. In the next phase, a background subtraction method with midpoint analysis is applied for segmentation to extract the region of interest, achieving an accuracy of 95.30%. Segmentation is validated against the ground truth by comparing the segmented images with the validation data provided with the ISIC dataset.
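The abstract does not spell out how the "midpoint analysis" works; one plausible reading is thresholding at the midpoint of the intensity range after background subtraction. A hedged sketch under that assumption (`midpoint_threshold` is an illustrative name, not from the paper):

```python
def midpoint_threshold(pixels):
    """Binarize intensities at the midpoint of their range.

    A simple stand-in for the paper's midpoint analysis (details are
    not given in the abstract): pixels above (min + max) / 2 are marked
    foreground (1), the rest background (0).
    """
    lo, hi = min(pixels), max(pixels)
    mid = (lo + hi) / 2
    return [1 if p > mid else 0 for p in pixels]

# Hypothetical grayscale row after background subtraction:
print(midpoint_threshold([10, 200, 180, 30]))  # → [0, 1, 1, 0]
```

In a full pipeline this would follow hair removal and enhancement, and the binary mask would then be cleaned up morphologically before extracting the region of interest.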

{"title":"Skin Cancer Image Segmentation Based on Midpoint Analysis Approach","authors":"Uzma Saghir, Shailendra Kumar Singh, Moin Hasan","doi":"10.1007/s10278-024-01106-w","DOIUrl":"https://doi.org/10.1007/s10278-024-01106-w","url":null,"abstract":"<p>Skin cancer affects people of all ages and is a common disease. The death toll from skin cancer rises with a late diagnosis. An automated mechanism for early-stage skin cancer detection is required to diminish the mortality rate. Visual examination with scanning or imaging screening is a common mechanism for detecting this disease, but due to its similarity to other diseases, this mechanism shows the least accuracy. This article introduces an innovative segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections. The main objective of the research is to segment lesions from dermoscopic skin images. The suggested framework is completed in two steps. The first step is to pre-process the image; for this, we have applied a bottom hat filter for hair removal and image enhancement by applying DCT and color coefficient. In the next phase, a background subtraction method with midpoint analysis is applied for segmentation to extract the region of interest and achieves an accuracy of 95.30%. 
The ground truth for the validation of segmentation is accomplished by comparing the segmented images with validation data provided with the ISIC dataset.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"58 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Structure Fusion Attention Model to Detect Architectural Distortion on Mammography
IF 4.4 Zone 2 Engineering & Technology Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-04-16 DOI: 10.1007/s10278-024-01085-y
Ting-Wei Ou, Tzu-Chieh Weng, Ruey-Feng Chang

Architectural distortion (AD) is one of the most common findings on mammograms, and it may represent not only cancer but also a lesion, such as a radial scar, that may have an associated cancer. AD accounts for 18–45% of missed cancers, and its positive predictive value is approximately 74.5%. Early detection of AD leads to early diagnosis and treatment of the cancer and improves the overall prognosis; however, detecting AD is a challenging task. In this work, we propose a new approach for detecting architectural distortion in mammography images by combining preprocessing methods with a novel structure fusion attention model. The proposed structure-focused weighted orientation preprocessing combines the original image, an architecture enhancement map, and a weighted orientation map, highlighting suspicious AD locations. The proposed structure fusion attention model captures information from different channels and outperforms other models in terms of false positives and top sensitivity (the maximum sensitivity a model can achieve while accepting the highest number of false positives), reaching a top sensitivity of 0.92 with only 0.6590 false positives per image. The findings suggest that combining preprocessing methods with a novel network architecture can lead to more accurate and reliable AD detection. Overall, the proposed approach offers a novel perspective on detecting ADs, and we believe that our method can be applied in clinical settings in the future, assisting radiologists in the early detection of ADs from mammography and ultimately leading to early treatment of breast cancer patients.
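The abstract defines top sensitivity as the maximum sensitivity reachable under a false-positive budget. A small FROC-style sketch of that operating point (function name and data layout are assumptions, not from the paper):

```python
def sensitivity_at_fp_budget(detections, n_lesions, n_images, max_fp_per_image):
    """Best sensitivity achievable while keeping false positives per
    image within a budget, computed by sweeping the confidence
    threshold from high to low.

    `detections` is a list of (confidence, is_true_positive) pairs
    pooled over all images; `n_lesions` is the total lesion count.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    best = 0.0
    for conf, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp / n_images > max_fp_per_image:
                break  # budget exceeded: stop lowering the threshold
        best = max(best, tp / n_lesions)
    return best

# Hypothetical detections over 2 images containing 3 lesions:
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]
print(sensitivity_at_fp_budget(dets, n_lesions=3, n_images=2,
                               max_fp_per_image=0.5))  # → 0.666...
```

Sweeping `max_fp_per_image` over a grid of budgets and plotting the resulting sensitivities yields the familiar FROC curve.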

{"title":"A Novel Structure Fusion Attention Model to Detect Architectural Distortion on Mammography","authors":"Ting-Wei Ou, Tzu-Chieh Weng, Ruey-Feng Chang","doi":"10.1007/s10278-024-01085-y","DOIUrl":"https://doi.org/10.1007/s10278-024-01085-y","url":null,"abstract":"<p>Architectural distortion (AD) is one of the most common findings on mammograms, and it may represent not only cancer but also a lesion such as a radial scar that may have an associated cancer. AD accounts for 18–45% missed cancer, and the positive predictive value of AD is approximately 74.5%. Early detection of AD leads to early diagnosis and treatment of the cancer and improves the overall prognosis. However, detection of AD is a challenging task. In this work, we propose a new approach for detecting architectural distortion in mammography images by combining preprocessing methods and a novel structure fusion attention model. The proposed structure-focused weighted orientation preprocessing method is composed of the original image, the architecture enhancement map, and the weighted orientation map, highlighting suspicious AD locations. The proposed structure fusion attention model captures the information from different channels and outperforms other models in terms of false positives and top sensitivity, which refers to the maximum sensitivity that a model can achieve under the acceptance of the highest number of false positives, reaching 0.92 top sensitivity with only 0.6590 false positive per image. The findings suggest that the combination of preprocessing methods and a novel network architecture can lead to more accurate and reliable AD detection. 
Overall, the proposed approach offers a novel perspective on detecting ADs, and we believe that our method can be applied to clinical settings in the future, assisting radiologists in the early detection of ADs from mammography, ultimately leading to early treatment of breast cancer patients.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"306 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Machine Learning Models Using Diffusion-Weighted Images for Pathological Grade of Intrahepatic Mass-Forming Cholangiocarcinoma
IF 4.4 Zone 2 Engineering & Technology Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-04-16 DOI: 10.1007/s10278-024-01103-z
Li-Hong Xing, Shu-Ping Wang, Li-Yong Zhuo, Yu Zhang, Jia-Ning Wang, Ze-Peng Ma, Ying-Jia Zhao, Shuang-Rui Yuan, Qian-He Zu, Xiao-Ping Yin

Is the radiomic approach, utilizing diffusion-weighted imaging (DWI), capable of predicting the pathological grade of intrahepatic mass-forming cholangiocarcinoma (IMCC)? And which model performs best among the diverse algorithms currently available? The objective of our study was to develop DWI radiomic models based on different machine learning algorithms and identify the optimal prediction model. We retrospectively analyzed the DWI data of 77 patients with pathologically confirmed IMCC. Fifty-seven patients initially included in the study were randomly assigned to either the training set or the validation set at a ratio of 7:3. We established four classifier models, namely random forest (RF), support vector machine (SVM), logistic regression (LR), and gradient boosting decision tree (GBDT), by manually contouring the region of interest and extracting prominent radiomic features. External validation was performed with the DWI data of 20 additional patients with IMCC who were subsequently included in the study. The area under the receiver operating characteristic curve (AUC), accuracy (ACC), precision (PRE), sensitivity (REC), and F1 score were used to evaluate the diagnostic performance of the models. After feature selection, nine features were retained; skewness was the most discriminative radiomic feature, followed by the Gray Level Co-occurrence Matrix feature Imc1 (glcm-Imc1) and kurtosis, whose diagnostic performances were slightly inferior to skewness. Skewness and kurtosis correlated negatively with the pathological grade of IMCC, while glcm-Imc1 correlated positively with it. Compared with the other three models, the SVM radiomic model had the best diagnostic performance, with an AUC of 0.957, an accuracy of 88.2%, a sensitivity of 85.7%, a precision of 85.7%, and an F1 score of 85.7% in the training set, and an AUC of 0.829, an accuracy of 76.5%, a sensitivity of 71.4%, a precision of 71.4%, and an F1 score of 71.4% in the external validation set. The DWI-based radiomic model proved efficacious in predicting the pathological grade of IMCC, and the SVM classifier had the best prediction efficiency and robustness. Consequently, this SVM-based model can be further explored as a non-invasive preoperative prediction method in clinical practice.
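Skewness and kurtosis, the first-order features the study found most discriminative, follow the standard moment formulas over ROI intensities. An illustrative pure-Python version (the study presumably used a radiomics toolkit; this sketch uses population moments and Fisher excess kurtosis):

```python
import math

def skewness_kurtosis(values):
    """Skewness and excess kurtosis of ROI intensity values.

    Computed from the central moments m2, m3, m4:
    skewness = m3 / m2^(3/2); excess kurtosis = m4 / m2^2 - 3
    (so a normal distribution has excess kurtosis 0).
    """
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0
    return skew, kurt

# A symmetric toy ROI: skewness 0, flat tails give negative kurtosis.
print(skewness_kurtosis([1, 2, 3, 4, 5]))  # → (0.0, -1.3)
```

Radiomics toolkits expose the same quantities (sometimes without the minus 3 correction), so definitions should be checked before comparing feature values across studies.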

{"title":"Comparison of Machine Learning Models Using Diffusion-Weighted Images for Pathological Grade of Intrahepatic Mass-Forming Cholangiocarcinoma","authors":"Li-Hong Xing, Shu-Ping Wang, Li-Yong Zhuo, Yu Zhang, Jia-Ning Wang, Ze-Peng Ma, Ying-Jia Zhao, Shuang-Rui Yuan, Qian-He Zu, Xiao-Ping Yin","doi":"10.1007/s10278-024-01103-z","DOIUrl":"https://doi.org/10.1007/s10278-024-01103-z","url":null,"abstract":"<p>Is the radiomic approach, utilizing diffusion-weighted imaging (DWI), capable of predicting the various pathological grades of intrahepatic mass-forming cholangiocarcinoma (IMCC)? Furthermore, which model demonstrates superior performance among the diverse algorithms currently available? The objective of our study is to develop DWI radiomic models based on different machine learning algorithms and identify the optimal prediction model. We undertook a retrospective analysis of the DWI data of 77 patients with IMCC confirmed by pathological testing. Fifty-seven patients initially included in the study were randomly assigned to either the training set or the validation set in a ratio of 7:3. We established four different classifier models, namely random forest (RF), support vector machines (SVM), logistic regression (LR), and gradient boosting decision tree (GBDT), by manually contouring the region of interest and extracting prominent radiomic features. An external validation of the model was performed with the DWI data of 20 patients with IMCC who were subsequently included in the study. The area under the receiver operating curve (AUC), accuracy (ACC), precision (PRE), sensitivity (REC), and F1 score were used to evaluate the diagnostic performance of the model. 
Following the process of feature selection, a total of nine features were retained, with skewness being the most crucial radiomic feature demonstrating the highest diagnostic performance, followed by Gray Level Co-occurrence Matrix lmc1 (glcm-lmc1) and kurtosis, whose diagnostic performances were slightly inferior to skewness. Skewness and kurtosis showed a negative correlation with the pathological grading of IMCC, while glcm-lmc1 exhibited a positive correlation with the IMCC pathological grade. Compared with the other three models, the SVM radiomic model had the best diagnostic performance with an AUC of 0.957, an accuracy of 88.2%, a sensitivity of 85.7%, a precision of 85.7%, and an F1 score of 85.7% in the training set, as well as an AUC of 0.829, an accuracy of 76.5%, a sensitivity of 71.4%, a precision of 71.4%, and an F1 score of 71.4% in the external validation set. The DWI-based radiomic model proved to be efficacious in predicting the pathological grade of IMCC. The model with the SVM classifier algorithm had the best prediction efficiency and robustness. Consequently, this SVM-based model can be further explored as an option for a non-invasive preoperative prediction method in clinical practice.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"43 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Study on Fine-Grained Visual Classification of Low-Resolution Urinary Erythrocyte
IF 4.4 Zone 2 Engineering & Technology Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-04-15 DOI: 10.1007/s10278-024-01082-1
Qingbo Ji, Tingshuo Yin, Pengfei Zhang, Qingquan Liu, Changbo Hou

Morphological analysis of urinary red blood cells, sometimes referred to as an "extracorporeal renal biopsy," holds significant importance for clinical laboratory testing. However, the accuracy of existing urinary red blood cell morphology analyzers is suboptimal, and they are not widely used in medical examinations. Challenges include low image spatial resolution, blurred distinguishing features between cells, difficult fine-grained feature extraction, and insufficient data volume. This article aims to improve the classification accuracy of low-resolution urinary red blood cells. We propose a super-resolution method based on a category-aware loss and an RBC-MIX data augmentation approach. It optimizes the cross-entropy loss to maximize the classification boundary and improve intra-class tightness and inter-class difference, achieving fine-grained classification of low-resolution urinary red blood cells. Experimental results show that the method reaches an accuracy of 97.8% on low-resolution urinary red blood cell images. The algorithm attains outstanding classification performance while requiring only category labels, and can serve as a practical reference for urinary red blood cell morphology examination.
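The abstract says the category-aware loss "optimizes the cross-entropy loss to maximize the classification boundary" but does not give the formulation. One standard way to widen the boundary is an additive margin on the true-class logit, sketched here as an assumption rather than the paper's actual loss:

```python
import math

def margin_cross_entropy(logits, target, margin=0.5):
    """Cross-entropy with an additive margin subtracted from the
    true-class logit before the softmax. The margin forces the true
    class to win by a larger gap, tightening intra-class clusters and
    enlarging inter-class separation (illustrative variant; the
    paper's exact category-aware loss is not specified).
    """
    adjusted = [z - margin if i == target else z
                for i, z in enumerate(logits)]
    # numerically stable log-sum-exp
    m = max(adjusted)
    log_sum = m + math.log(sum(math.exp(z - m) for z in adjusted))
    return log_sum - adjusted[target]

# With margin=0 this reduces to ordinary softmax cross-entropy;
# a positive margin penalizes the same logits more heavily.
print(margin_cross_entropy([2.0, 0.0], target=0, margin=0.0))
print(margin_cross_entropy([2.0, 0.0], target=0, margin=0.5))
```

In training, the per-sample losses would be averaged over a batch and backpropagated through the super-resolution and classification networks jointly.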

Graphical Abstract

{"title":"Study on Fine-Grained Visual Classification of Low-Resolution Urinary Erythrocyte","authors":"Qingbo Ji, Tingshuo Yin, Pengfei Zhang, Qingquan Liu, Changbo Hou","doi":"10.1007/s10278-024-01082-1","DOIUrl":"https://doi.org/10.1007/s10278-024-01082-1","url":null,"abstract":"<p>The morphological analysis test item of urine red blood cells is referred to as “extracorporeal renal biopsy,” which holds significant importance for medical department testing. However, the accuracy of existing urine red blood cell morphology analyzers is suboptimal, and they are not widely utilized in medical examinations. Challenges include low image spatial resolution, blurred distinguishing features between cells, difficulty in fine-grained feature extraction, and insufficient data volume. This article aims to improve the classification accuracy of low-resolution urine red blood cells. This paper proposes a super-resolution method based on category-aware loss and an RBC-MIX data enhancement approach. It optimizes the cross-entropy loss to maximize the classification boundary and improve intra-class tightness and inter-class difference, achieving fine-grained classification of low-resolution urine red blood cells. Experimental outcomes demonstrate that with this method, an accuracy rate of 97.8% can be achieved for low-resolution urine red blood cell images. This algorithm attains outstanding classification performance for low-resolution urine red blood cells with only category labels required. 
This method can serve as a practical reference for urine red blood cell morphology examination items.</p><h3 data-test=\"abstract-sub-heading\">Graphical Abstract</h3>\u0000","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"65 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140590993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0