
Journal of imaging informatics in medicine: Latest Publications

Retina Blood Vessels Segmentation and Classification with the Multi-featured Approach.
Pub Date : 2024-08-08 DOI: 10.1007/s10278-024-01219-2
Usharani Bhimavarapu

Segmenting retinal blood vessels poses a significant challenge due to the irregularities inherent in small vessels. The complexity arises from the intricate task of effectively merging features at multiple levels, coupled with potential spatial information loss during successive down-sampling steps. This particularly affects the identification of small and faintly contrasting vessels. To address these challenges, we present a model tailored for automated arterial and venous (A/V) classification, complementing blood vessel segmentation. This paper presents an advanced methodology for segmenting and classifying retinal vessels using a series of sophisticated pre-processing and feature extraction techniques. The ensemble filter approach, incorporating bilateral and Laplacian edge detectors, enhances image contrast and preserves edges. The proposed algorithm further refines the image by generating an orientation map. During the vessel extraction step, a fully convolutional network processes the input image to create a detailed vessel map, enhanced by attention operations that improve the model's perception and resilience. The encoder extracts semantic features, while the attention module refines the blood vessel depiction, resulting in highly accurate segmentation. The model was verified on the STARE dataset (400 images), the DRIVE dataset (40 images), the HRF dataset (45 images), and the INSPIRE-AVR dataset (40 images). The proposed model demonstrated superior performance across all datasets, achieving an accuracy of 97.5% on DRIVE, 99.25% on STARE, 98.33% on INSPIRE-AVR, and 98.67% on HRF. These results highlight the method's effectiveness in accurately segmenting and classifying retinal vessels.
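The ensemble filter step described above can be sketched in a few lines. The abstract does not give its exact formulation, so the functions, parameters, and the unsharp-mask-style combination below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def laplacian(img):
    # 4-neighbour discrete Laplacian, a simple edge detector
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    # brute-force bilateral filter: a Gaussian spatial weight times a
    # Gaussian range weight, so edges are preserved while noise is smoothed
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            num += w * shifted
            den += w
    return num / den

def ensemble_enhance(img, alpha=0.7):
    # hypothetical combination: edge-preserving smoothing, then subtract a
    # fraction of the Laplacian to sharpen vessel boundaries
    smooth = bilateral(img)
    return smooth - alpha * laplacian(smooth)

rng = np.random.default_rng(0)
img = rng.random((32, 32))          # stand-in for a fundus image channel
enhanced = ensemble_enhance(img)
```

In practice such filters would be applied to the green channel of the fundus image, where vessel contrast is typically highest.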

Citations: 0
Automated 3D Cobb Angle Measurement Using U-Net in CT Images of Preoperative Scoliosis Patients.
Pub Date : 2024-08-08 DOI: 10.1007/s10278-024-01211-w
Lening Li, Teng Zhang, Fan Lin, Yuting Li, Man-Sang Wong

To propose a deep learning framework, "SpineCurve-net", for automated measurement of 3D Cobb angles from computed tomography (CT) images of presurgical scoliosis patients. A total of 116 scoliosis patients were analyzed, divided into a training set of 89 patients (average age 32.4 ± 24.5 years) and a validation set of 27 patients (average age 17.3 ± 5.8 years). Vertebral identification and curve fitting were achieved through U-net and NURBS-net, resulting in a Non-Uniform Rational B-Spline (NURBS) curve of the spine. The 3D Cobb angles were measured in two ways: the predicted 3D Cobb angle (PRED-3D-CA), the maximum value in the smoothed angle map derived from the NURBS curve, and the 2D mapping Cobb angle (MAP-2D-CA), the maximal angle formed by the tangent vectors along the projected 2D spinal curve. The model segmented spinal masks effectively, capturing easily missed vertebral bodies. Spoke kernel filtering distinguished vertebral regions and centralized the spinal curves. The SpineCurve-net Cobb angle measurements (PRED-3D-CA and MAP-2D-CA) correlated strongly with the surgeons' annotated Cobb angles (ground truth, GT) based on 2D radiographs, with high Pearson correlation coefficients of 0.983 and 0.934, respectively. This paper proposed an automated technique for calculating the 3D Cobb angle in preoperative scoliosis patients, yielding results highly correlated with traditional 2D Cobb angle measurements. Given its capacity to accurately represent the three-dimensional nature of spinal deformities, this method shows potential in aiding physicians to develop more precise surgical strategies in future cases.
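The MAP-2D-CA definition above (the maximal angle between tangent vectors along the projected 2D spinal curve) can be computed directly from sampled curve points. The following numpy sketch is illustrative; the sampling density and the synthetic demo curve are assumptions, not the paper's data.

```python
import numpy as np

def map_2d_cobb_angle(curve):
    # curve: (N, 2) array of points sampled along the projected 2D spinal
    # curve; the Cobb angle is the largest angle between any two tangents
    tang = np.diff(curve, axis=0)                       # finite-difference tangents
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)  # unit vectors
    cos = np.clip(tang @ tang.T, -1.0, 1.0)              # pairwise cosines
    return np.degrees(np.arccos(cos).max())

# demo: a synthetic C-shaped curve, x = 0.3*sin(t), y = t
t = np.linspace(0.0, np.pi, 200)
curve = np.stack([0.3 * np.sin(t), t], axis=1)
angle = map_2d_cobb_angle(curve)
```

For this demo curve the two extreme tangents differ by roughly 2·atan(0.3), i.e. about 33 degrees; on a real spine the curve points would come from the fitted NURBS projection.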

Citations: 0
IEA-Net: Internal and External Dual-Attention Medical Segmentation Network with High-Performance Convolutional Blocks.
Pub Date : 2024-08-06 DOI: 10.1007/s10278-024-01217-4
Bincheng Peng, Chao Fan

Deep learning is developing rapidly in the field of image segmentation, and medical image segmentation is one of its key applications. Conventional CNNs have achieved great success in general medical image segmentation tasks, but they lose detail during feature extraction and lack the ability to explicitly model long-range dependencies, which makes them difficult to adapt to human organ segmentation. Although methods incorporating attention mechanisms have made good progress in semantic segmentation, most current attention mechanisms are limited to a single sample; because the number of human organ image samples is large, ignoring the correlations between samples hinders segmentation. To solve these problems, this paper proposes an internal and external dual-attention segmentation network (IEA-Net), in which the ICSwR (interleaved convolutional system with residual) module and the IEAM module are designed. The ICSwR contains interleaved convolutions and skip (hopping) connections, used for initial feature extraction in the encoder. The IEAM (internal and external dual-attention) module consists of the LGGW-SA (local-global Gaussian-weighted self-attention) module and the EA module in a tandem structure. The LGGW-SA module focuses on learning local-global feature correlations within individual samples for efficient feature extraction, while the EA module is designed to capture inter-sample connections, addressing multi-sample complexities. Additionally, skip connections are incorporated into each IEAM module in both the encoder and decoder to reduce feature loss. We tested our method on the Synapse multi-organ segmentation dataset and the ACDC cardiac segmentation dataset, and the experimental results show that the proposed method outperforms other state-of-the-art methods.
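The paper's EA module is not specified in the abstract; the sketch below follows one published formulation of external attention, where a small shared memory learned over the whole dataset replaces per-sample keys and values, so inter-sample correlations flow through the shared memory. All shapes, the memory size, and the double normalization here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(x, mk, mv):
    # x: (B, N, d) batch of token features
    # mk, mv: (S, d) external key/value memories shared across all samples
    attn = softmax(x @ mk.T, axis=-1)                        # (B, N, S)
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)   # double normalization
    return attn @ mv                                         # (B, N, d)

B, N, d, S = 2, 16, 8, 4                # hypothetical sizes for the demo
x = rng.normal(size=(B, N, d))
mk = rng.normal(size=(S, d))
mv = rng.normal(size=(S, d))
y = external_attention(x, mk, mv)
```

In a trained network mk and mv would be learned parameters; because they are shared across the dataset, attention cost is linear in the token count rather than quadratic.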

Citations: 0
Airway and Airway Obstruction Site Segmentation Study Using U-Net with Drug-Induced Sleep Endoscopy Images.
Pub Date : 2024-08-05 DOI: 10.1007/s10278-024-01208-5
Yeong Hun Kang, Jin Youp Kim, Young Jae Kim, Sung Hyun Kim, Kwang Gi Kim, Chae-Seo Rhee

Obstructive sleep apnea is characterized by a decrease or cessation of breathing due to repetitive closure of the upper airway during sleep, leading to a decrease in blood oxygen saturation. In this study, employing a U-Net model, we utilized drug-induced sleep endoscopy images to segment the major causes of airway obstruction, including the epiglottis, oropharynx lateral walls, and tongue base. The evaluation metrics included sensitivity, specificity, accuracy, and Dice score, with airway sensitivity at 0.93 (± 0.06), specificity at 0.96 (± 0.01), accuracy at 0.95 (± 0.01), and Dice score at 0.84 (± 0.03), indicating overall high performance. The results indicate the potential for artificial intelligence (AI)-driven automatic interpretation of sleep disorder diagnosis, with implications for standardizing medical procedures and improving healthcare services. The study suggests that advancements in AI technology hold promise for enhancing diagnostic accuracy and treatment efficacy in sleep and respiratory disorders, fostering competitiveness in the medical AI market.
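The reported metrics (sensitivity, specificity, accuracy, and Dice score) all follow from the confusion matrix between predicted and ground-truth binary masks. The toy masks below are illustrative only:

```python
import numpy as np

def seg_metrics(pred, gt):
    # pred, gt: binary segmentation masks of the same shape
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    acc = (tp + tn) / pred.size            # accuracy
    dice = 2 * tp / (2 * tp + fp + fn)     # Dice score
    return sens, spec, acc, dice

# toy example: two partially overlapping 4x4 squares on an 8x8 grid
gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1
pred = np.zeros((8, 8), int); pred[3:7, 3:7] = 1
sens, spec, acc, dice = seg_metrics(pred, gt)   # dice = 0.5625 here
```

In a study like the one above, the per-image metrics would then be averaged across the test set to give the mean ± standard deviation values reported.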

Citations: 0
My Experience with Academy - Advancing Innovation in Imaging.
Pub Date : 2024-08-05 DOI: 10.1007/s10278-024-01203-w
Mana Moassefi
Citations: 0
Heatmap-Based Active Shape Model for Landmark Detection in Lumbar X-ray Images.
Pub Date : 2024-08-05 DOI: 10.1007/s10278-024-01210-x
Minho Choi, Jun-Su Jang

Medical staff inspect lumbar X-ray images to diagnose lumbar spine diseases, and the analysis process is increasingly automated using deep-learning techniques. Landmark detection is a necessary step in automatically localizing the vertebrae and identifying their morphological features. However, detection errors may occur owing to noise and ambiguity in the images, as well as individual variations in the shape of the lumbar vertebrae. This study proposes a method to improve the robustness of landmark detection. The method assumes that landmarks are detected by a convolutional neural network-based two-step model consisting of Pose-Net and M-Net, which generates a heatmap response indicating probable landmark positions. The proposed method then corrects the landmark positions using the heatmap response and an active shape model, which incorporates statistical information on the landmark distribution. Experiments were conducted on 3600 lumbar X-ray images, and the results showed that the proposed method reduced the landmark detection error: the average of the maximum errors decreased by 5.58%, combining the outstanding image analysis capabilities of deep learning with statistical shape constraints on the landmark distribution. The proposed method can also be easily integrated with other techniques, such as CoordConv layers and a non-directional part affinity field, to further increase the robustness of landmark detection, yielding additional performance gains. These advantages can improve the reliability of automatic systems for inspecting lumbar X-ray images, benefiting both patients and medical staff by reducing medical expenses and increasing diagnostic efficiency.
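The two-step pipeline described above (heatmap peak extraction, then active-shape-model regularization) can be sketched as follows. The PCA shape model, the coefficient clipping threshold, and the trivial demo modes are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def heatmap_peaks(heatmaps):
    # heatmaps: (L, H, W), one response map per landmark -> (L, 2) (row, col)
    L, H, W = heatmaps.shape
    flat = heatmaps.reshape(L, -1).argmax(axis=1)
    return np.stack([flat // W, flat % W], axis=1).astype(float)

def asm_correct(points, mean_shape, modes, clip=2.0):
    # active-shape-model step: project the detected points onto a PCA shape
    # model and reconstruct; clipping each coefficient keeps the result a
    # statistically plausible shape (modes: (K, 2L) orthonormal rows)
    b = modes @ (points.ravel() - mean_shape)
    b = np.clip(b, -clip, clip)
    return (mean_shape + modes.T @ b).reshape(points.shape)

# demo: three landmarks, peaks recovered from ideal heatmaps then regularized
hm = np.zeros((3, 10, 10))
hm[0, 2, 3] = hm[1, 5, 5] = hm[2, 8, 7] = 1.0
pts = heatmap_peaks(hm)                      # [[2, 3], [5, 5], [8, 7]]
mean_shape = pts.ravel()
modes = np.eye(6)[:2]                        # two trivial modes for the demo
corrected = asm_correct(pts, mean_shape, modes)
```

In a real system the mean shape and modes would come from a PCA of annotated training landmarks, so implausible detections get pulled back toward the learned shape distribution.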

Citations: 0
DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation.
Pub Date : 2024-08-05 DOI: 10.1007/s10278-024-01207-6
Pengfei Cai, Biyuan Li, Gaowei Sun, Bo Yang, Xiuwei Wang, Chunjie Lv, Jun Yan

Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, with many capillaries blending into the background, and exhibit low contrast. Moreover, encoder-decoder networks for retinal vessel segmentation suffer irreversible loss of detailed features due to repeated encoding and decoding, leading to incorrect segmentation of the vessels. Meanwhile, single-dimensional attention mechanisms have limitations, neglecting the importance of multidimensional features. To solve these issues, this paper proposes a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during decoding and reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE, and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments on the DRIVE, CHASEDB1, and STARE datasets achieved sensitivity (Sen) of 0.8305, 0.8784, and 0.8654 and AUC of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of the proposed network, particularly in segmenting fine retinal vessels.
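The reported AUC can be computed as a Mann-Whitney rank statistic over per-pixel vessel scores: the probability that a randomly chosen vessel pixel scores higher than a randomly chosen background pixel. A minimal sketch with toy scores (the pairwise form is fine for small inputs, though real evaluations use rank-based implementations):

```python
import numpy as np

def roc_auc(scores, labels):
    # AUC via the Mann-Whitney statistic: P(score_pos > score_neg),
    # counting ties as half
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# toy example: one misranked positive -> AUC of 0.75
auc = roc_auc([0.9, 0.4, 0.35, 0.8], [1, 1, 0, 0])
```

Sensitivity, by contrast, is threshold-dependent, which is why papers typically report both: AUC summarizes ranking quality while Sen reflects a chosen operating point.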

Citations: 0
A Comparative Study of Performance Between Federated Learning and Centralized Learning Using Pathological Image of Endometrial Cancer.
Pub Date : 2024-08-01 Epub Date: 2024-02-21 DOI: 10.1007/s10278-024-01020-1
Jong Chan Yeom, Jae Hoon Kim, Young Jae Kim, Jisup Kim, Kwang Gi Kim

Federated learning, an innovative artificial intelligence training method, offers a secure solution for institutions to collaboratively develop models without sharing raw data. This approach offers immense promise and is particularly advantageous for domains dealing with sensitive information, such as patient data. However, when confronted with a distributed data environment, challenges arise due to data paucity or inherent heterogeneity, potentially impacting the performance of federated learning models. Hence, scrutinizing the efficacy of this method in such intricate settings is indispensable. To address this, we harnessed pathological image datasets of endometrial cancer from four hospitals for training and evaluating the performance of a federated learning model and compared it with a centralized learning model. With optimal processing techniques (data augmentation, color normalization, and adaptive optimizer), federated learning exhibited lower precision but higher recall and Dice similarity coefficient (DSC) than centralized learning. Hence, considering the critical importance of recall in the context of medical image processing, federated learning is demonstrated as a viable and applicable approach in this field, offering advantages in terms of both performance and data security.
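The abstract does not name the aggregation rule used; the sketch below assumes standard FedAvg, where the server averages client model weights in proportion to local sample counts, so raw pathology images never leave each hospital. The two-client demo values are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # FedAvg aggregation (assumed, not stated in the abstract): the server
    # combines each client's weight tensors, weighted by local dataset size
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w
    return agg

# demo: two "hospitals" holding one-layer models of two parameters each
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
agg = fedavg([w_a, w_b], [100, 300])   # 0.25*w_a + 0.75*w_b
```

One round of training would then broadcast `agg` back to all clients for further local updates; data heterogeneity across the four hospitals is exactly what makes such rounds behave differently from centralized training.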

Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism.
Pub Date : 2024-08-01 Epub Date: 2024-02-20 DOI: 10.1007/s10278-024-00979-1
Jianning Chi, Xiaolin Wei, Zhiyi Sun, Yongming Yang, Bin Yang

Low-dose computed tomography (LDCT) has been widely used in medical diagnosis. In practice, doctors often zoom in on LDCT slices for a clearer view of lesions, yet a simple zooming operation fails to suppress low-dose artifacts, leading to distorted details. Therefore, numerous LDCT super-resolution (SR) methods have been proposed to improve zoomed image quality without increasing the dose in CT scanning. However, existing methods still have some drawbacks that need to be addressed. First, the region of interest (ROI) is not emphasized due to the lack of guidance in the reconstruction process. Second, the convolutional blocks extracting fixed-resolution features fail to concentrate on the essential multi-scale features. Third, a single SR head cannot suppress the residual artifacts. To address these issues, we propose an LDCT joint SR and denoising reconstruction network. Our proposed network consists of global dual-guidance attention fusion modules (GDAFMs) and multi-scale anastomosis blocks (MABs). The GDAFM directs the network to focus on the ROI by fusing extra mask guidance with average CT image guidance, while the MAB introduces hierarchical features through anastomosis connections to leverage multi-scale features and strengthen feature representation. To suppress radial residual artifacts, we optimize our network using the feedback feature distillation mechanism (FFDM), which shares the backbone to learn features corresponding to the denoising task. We apply the proposed method to the 3D-IRCADB and PANCREAS datasets to evaluate its ability in LDCT image SR reconstruction. The experimental results, compared with state-of-the-art methods, illustrate the superiority of our approach with respect to peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and qualitative observations. Our proposed LDCT joint SR and denoising reconstruction network has been extensively evaluated through ablation, quantitative, and qualitative experiments. The results demonstrate that our method can recover noise-free, detail-sharp images, resulting in better reconstruction results. Code is available at https://github.com/neu-szy/ldct_sr_dn_w_ffdm .
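PSNR, one of the metrics reported above, is computed directly from the mean squared error between the reference and reconstructed images. A minimal sketch with illustrative pixel values:

```python
import math

# PSNR in dB for two grayscale images given as flat pixel lists in [0, 255].
# Pixel values below are illustrative only, not taken from the paper.

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]
rec = [101, 119, 131, 139]  # off by one everywhere, so MSE = 1
print(round(psnr(ref, rec), 2))  # → 48.13
```

Higher is better: halving the MSE raises PSNR by about 3 dB, which is why small PSNR gaps between SR methods can still reflect visible quality differences.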

Deep Learning Radiomics Analysis of CT Imaging for Differentiating Between Crohn's Disease and Intestinal Tuberculosis.
Pub Date : 2024-08-01 Epub Date: 2024-02-29 DOI: 10.1007/s10278-024-01059-0
Ming Cheng, Hanyue Zhang, Wenpeng Huang, Fei Li, Jianbo Gao

This study aimed to develop and evaluate a CT-based deep learning radiomics model for differentiating between Crohn's disease (CD) and intestinal tuberculosis (ITB). A total of 330 patients pathologically confirmed as having CD or ITB at the First Affiliated Hospital of Zhengzhou University were divided into validation dataset one (CD: 167; ITB: 57) and validation dataset two (CD: 78; ITB: 28). Based on validation dataset one, the synthetic minority oversampling technique (SMOTE) was adopted to create a balanced dataset as training data for feature selection and model construction. Handcrafted and deep learning (DL) radiomics features were extracted from the arterial- and venous-phase images, respectively. Interobserver consistency analysis, Spearman's correlation, univariate analysis, and least absolute shrinkage and selection operator (LASSO) regression were used to select features. Based on the extracted multi-phase radiomics features, six logistic regression models were constructed. The diagnostic performance of the different models was compared using ROC analysis and the Delong test. The arterial-venous combined deep learning radiomics model for differentiating between CD and ITB showed high prediction quality, with AUCs of 0.885, 0.877, and 0.800 in the SMOTE dataset, validation dataset one, and validation dataset two, respectively. Moreover, the deep learning radiomics model outperformed the handcrafted radiomics model on same-phase images. In validation dataset one, the Delong test indicated a significant difference in AUC for the arterial models (p = 0.037), but not for the venous or arterial-venous combined models (p = 0.398 and p = 0.265), when comparing deep learning radiomics models with handcrafted radiomics models. In our study, the arterial-venous combined model based on deep learning radiomics analysis exhibited good performance in differentiating between CD and ITB.
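The AUCs reported above have a direct probabilistic reading via the Mann-Whitney interpretation: AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A small sketch with hypothetical scores (not the study's data):

```python
# AUC via the Mann-Whitney interpretation: the probability that a randomly
# chosen positive case receives a higher model score than a randomly chosen
# negative one (ties count as half). Scores and labels are hypothetical.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # → 0.889
```

The Delong test used in the study compares two such AUCs on the same cases while accounting for their correlation; the pairwise-comparison view above is the statistic it is built on.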
