
Journal of Digital Imaging: Latest Publications

Polyp Segmentation Using a Hybrid Vision Transformer and a Hybrid Loss Function
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00954-2
Evgin Goceri

Accurate and early detection of precursor adenomatous polyps and their removal at an early stage can significantly decrease the mortality rate and the occurrence of the disease, since most colorectal cancers evolve from adenomatous polyps. However, accurate detection and segmentation of polyps by doctors are difficult, mainly because of these factors: (i) the quality of polyp screening with colonoscopy depends on the imaging quality and the experience of the doctors; (ii) visual inspection by doctors is time-consuming, burdensome, and tiring; (iii) prolonged visual inspection can lead to polyps being missed even when the physician is experienced. To overcome these problems, computer-aided methods have been proposed. However, they have some disadvantages or limitations. Therefore, in this work, a new architecture based on residual transformer layers has been designed and used for polyp segmentation. The proposed segmentation network utilizes both high-level semantic features and low-level spatial features. Also, a novel hybrid loss function has been proposed. The loss function, designed with focal Tversky loss, binary cross-entropy, and the Jaccard index, reduces image-wise and pixel-wise differences as well as improving regional consistency. Experimental work has indicated the effectiveness of the proposed approach in terms of Dice similarity (0.9048), recall (0.9041), precision (0.9057), and F2 score (0.8993). Comparisons with state-of-the-art methods have shown its better performance.
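As a concrete illustration of such a hybrid loss, the sketch below combines focal Tversky loss, binary cross-entropy, and a soft Jaccard loss on flattened probability maps. The Tversky parameters (alpha, beta), the focal exponent gamma, and the equal term weights are illustrative assumptions, not the values used in the paper:

```python
import math

def tversky_index(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Soft Tversky index between predicted probabilities and binary targets."""
    tp = sum(p * t for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    return (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_tversky_loss(pred, target, gamma=0.75):
    # Raising (1 - TI) to gamma < 1 emphasizes hard, low-overlap examples.
    return (1.0 - tversky_index(pred, target)) ** gamma

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy over pixels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def soft_jaccard_loss(pred, target, eps=1e-7):
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return 1.0 - (inter + eps) / (union + eps)

def hybrid_loss(pred, target, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms; weights are illustrative."""
    return (w[0] * focal_tversky_loss(pred, target)
            + w[1] * bce_loss(pred, target)
            + w[2] * soft_jaccard_loss(pred, target))

# A near-perfect prediction should score a much lower loss than a poor one.
good = hybrid_loss([0.95, 0.9, 0.05], [1, 1, 0])
bad  = hybrid_loss([0.2, 0.3, 0.8], [1, 1, 0])
```

In practice the same terms would be computed on GPU tensors over full segmentation maps; the focal Tversky term handles class imbalance, while the BCE and Jaccard terms penalize pixel-wise and region-wise disagreement.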

Citations: 0
Development of Local Software for Automatic Measurement of Geometric Parameters in the Proximal Femur Using a Combination of a Deep Learning Approach and an Active Shape Model on X-ray Images
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00953-3
Hamid Alavi, Mehdi Seifi, Mahboubeh Rouhollahei, Mehravar Rafati, Masoud Arabfard

Proximal femur geometry is an important risk factor for diagnosing and predicting hip and femur injuries. Hence, the development of an automated approach for measuring these parameters could help physicians with the early identification of hip and femur ailments. This paper presents a technique that combines the active shape model (ASM) and deep learning methodologies. First, the femur boundary is extracted by a deep learning neural network. Then, the femur’s anatomical landmarks are fitted to the extracted border using the ASM method. Finally, the geometric parameters of the proximal femur, including femur neck axis length (FNAL), femur head diameter (FHD), femur neck width (FNW), shaft width (SW), neck shaft angle (NSA), and alpha angle (AA), are calculated by measuring the distances and angles between the landmarks. The dataset of hip radiographic images consisted of 428 images, 208 from men and 220 from women. These images were split into training and testing sets for analysis. The deep learning network and the ASM were subsequently trained on the training dataset. On the testing dataset, the automatic measurement of the FNAL, FHD, FNW, SW, NSA, and AA parameters resulted in mean errors of 1.19%, 1.46%, 2.28%, 2.43%, 1.95%, and 4.53%, respectively.
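Given fitted landmark coordinates, distance- and angle-based parameters such as those above reduce to elementary geometry. The sketch below computes a neck-shaft-angle-style measurement from three hypothetical 2D landmarks; the landmark positions and names are illustrative and not taken from the paper:

```python
import math

def angle_between(p, vertex, q):
    """Angle in degrees at `vertex`, formed by the rays vertex->p and vertex->q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def distance(p, q):
    """Euclidean distance between two landmarks (in pixels)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical landmarks in pixel coordinates.
head_center = (0.0, 10.0)    # femoral head center
junction    = (10.0, 0.0)    # neck/shaft axis intersection
shaft_point = (10.0, -20.0)  # point along the shaft axis

nsa  = angle_between(head_center, junction, shaft_point)  # neck shaft angle
fnal = distance(head_center, junction)                    # neck-axis length proxy
```

For this toy geometry the neck-shaft angle evaluates to 135 degrees, within the range typically reported for adult femurs; a real pipeline would feed the ASM-fitted landmark coordinates into the same two helpers.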

Citations: 0
Multi-Scale and Spatial Information Extraction for Kidney Tumor Segmentation: A Contextual Deformable Attention and Edge-Enhanced U-Net
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00900-2
Shamija Sherryl R. M. R., Jaya T.

Kidney tumor segmentation is a difficult task because of the complex spatial and volumetric information present in medical images. Recent advances in deep convolutional neural networks (DCNNs) have improved tumor segmentation accuracy. However, the practical usability of current CNN-based networks is constrained by their high computational complexity. Additionally, these techniques often struggle to make adaptive modifications based on the structure of the tumors, which can lead to blurred edges in segmentation results. To address these challenges, a lightweight architecture called the contextual deformable attention and edge-enhanced U-Net (CDA2E-Net) is proposed for high-accuracy pixel-level kidney tumor segmentation. Rather than using complex deep encoders, the approach integrates a lightweight depthwise dilated ShuffleNetV2 (LDS-Net) encoder into the CDA2E-Net framework. The proposed method also contains a multiscale attention feature pyramid pooling (MAF2P) module that improves the ability of multiscale features to adapt to various tumor shapes. Finally, an edge-enhanced loss function is introduced to guide the CDA2E-Net to concentrate on tumor edge information. The CDA2E-Net is evaluated on the KiTS19 and KiTS21 datasets, and the results demonstrate its superiority over existing approaches in terms of the Hausdorff distance (HD), intersection over union (IoU), and Dice similarity coefficient (DSC) metrics.
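The three evaluation metrics named above (DSC, IoU, HD) can be sketched in a few lines for flat binary masks and boundary point sets; the tiny example masks and point sets are illustrative only:

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

def iou(a, b):
    """Intersection over union between two flat binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))

pred = [1, 1, 0, 0]  # toy predicted mask
gt   = [1, 0, 1, 0]  # toy ground-truth mask
```

Here `dice(pred, gt)` is 0.5 and `iou(pred, gt)` is 1/3, illustrating that Dice is always at least as large as IoU for the same overlap; on real volumes these loops would run over array tensors instead of Python lists.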

Citations: 0
Evaluation of Spectral X-Ray Imaging for Panoramic Dental Images Based on a Simulation Framework
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00940-8
Daniel Berthe, Anna Kolb, Abdulrahman Rabi, Thorsten Sellerer, Villseveri Somerkivi, Georg Constantin Feuerriegel, Andreas Philipp Sauter, Felix Meurer, York Hämisch, Tuomas Pantsar, Henrik Lohman, Daniela Pfeiffer, Franz Pfeiffer

Modern photon counting detectors allow the calculation of virtual monoenergetic or material-decomposed X-ray images but are not yet used in dental panoramic radiography systems. To assess the diagnostic potential and image quality of photon counting detectors in dental panoramic radiography, approval from the local ethics committee was obtained for this retrospective study. Conventional CT scans of the head and neck region were segmented into bone and soft tissue. The resulting datasets were used to calculate panoramic equivalent thickness bone and soft tissue images by forward projection, using a geometry similar to that of conventional panoramic radiography systems. The panoramic equivalent thickness images were used to generate synthetic conventional panoramic radiographs and panoramic virtual monoenergetic radiographs at various energies. The conventional images, two virtual monoenergetic images at 40 keV and 60 keV, and material-separated bone and soft tissue panoramic equivalent thickness X-ray images simulated from 17 head CTs were evaluated for diagnostic value and image quality in a reader study involving three experienced radiologists. Compared to conventional panoramic radiographs, the material-separated bone panoramic equivalent thickness image exhibits higher image quality and diagnostic value in assessing bone structure (p < .001) and details such as teeth or root canals (p < .001). Panoramic virtual monoenergetic radiographs do not show a significant advantage over conventional panoramic radiographs. The reader study shows the potential of spectral X-ray imaging to improve the diagnostic value and image quality of dental panoramic imaging.
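Once per-pixel material equivalent thicknesses are known, a virtual monoenergetic radiograph follows from the Beer-Lambert law: intensity decays exponentially with the sum of attenuation coefficient times thickness per material. The sketch below uses made-up attenuation coefficients purely for illustration; real values depend on energy and material and are not taken from the paper:

```python
import math

# Illustrative linear attenuation coefficients in 1/cm (NOT physical data).
MU = {40: {"bone": 0.66, "soft": 0.27},
      60: {"bone": 0.40, "soft": 0.21}}

def monoenergetic_pixel(t_bone, t_soft, kev, i0=1.0):
    """Transmitted intensity for one pixel via the Beer-Lambert law."""
    mu = MU[kev]
    return i0 * math.exp(-(mu["bone"] * t_bone + mu["soft"] * t_soft))

def virtual_monoenergetic(bone_img, soft_img, kev):
    """Combine bone/soft equivalent thickness images into one radiograph."""
    return [[monoenergetic_pixel(b, s, kev) for b, s in zip(rb, rs)]
            for rb, rs in zip(bone_img, soft_img)]

# Toy 2x2 equivalent thickness images (cm).
vm40 = virtual_monoenergetic([[0.0, 1.0], [2.0, 0.5]],
                             [[1.0, 1.0], [1.0, 1.0]], 40)
```

As expected, pixels traversing more bone are darker, and for fixed thicknesses the higher-energy (60 keV) image transmits more than the 40 keV one, since attenuation falls with energy.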

Citations: 0
Enhancing Disease Classification with Deep Learning: a Two-Stage Optimization Approach for Monkeypox and Similar Skin Lesion Diseases
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00941-7
Serkan Savaş

Monkeypox (MPox) is an infectious disease caused by the monkeypox virus; its resemblance to other diseases makes accurate identification challenging. This study introduces a deep learning-based method to distinguish visually similar diseases, specifically MPox, chickenpox, and measles, addressing the 2022 global MPox outbreak. A two-stage optimization approach is presented. By analyzing 71 pre-trained deep neural network models, the study optimizes accuracy through transfer learning, fine-tuning, and ensemble learning techniques. In the first stage, the ConvNeXtBase, Large, and XLarge models were identified as achieving 97.5% accuracy. Afterwards, selection criteria were applied to the models identified in the first stage to choose members for the ensemble learning technique within the optimization approach. The top-performing ensemble model, EM3 (composed of RegNetX160, ResNetRS101, and ResNet101), attains an AUC of 0.9971 in the second stage. Evaluation on unseen data ensures model robustness and enhances the study’s overall validity and reliability. The design and implementation of the study have been optimized to address the limitations identified in the literature. This approach offers a rapid and highly accurate decision support system for timely MPox diagnosis, reducing human error and manual processes and enhancing clinic efficiency. It aids in early MPox detection, addresses diverse disease challenges, and informs imaging device software development. The study’s broad implications support global health efforts and showcase the potential of artificial intelligence in medical informatics for disease identification and diagnosis.
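One common way to build such an ensemble is soft voting: averaging the class-probability vectors produced by the member networks and taking the argmax. The sketch below is a generic illustration with hypothetical probabilities; the paper's EM3 ensemble may combine its members differently:

```python
def soft_vote(prob_lists):
    """Average the class-probability vectors from several models."""
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

def ensemble_predict(prob_lists, labels):
    """Return the label whose averaged probability is highest."""
    avg = soft_vote(prob_lists)
    return labels[max(range(len(avg)), key=avg.__getitem__)]

labels = ["mpox", "chickenpox", "measles"]
# Hypothetical softmax outputs of three member models for one image.
m1 = [0.6, 0.3, 0.1]
m2 = [0.5, 0.4, 0.1]
m3 = [0.2, 0.5, 0.3]

decision = ensemble_predict([m1, m2, m3], labels)
```

Soft voting lets a confident majority outvote a single disagreeing model (here the averaged vector is about [0.43, 0.40, 0.17], so the ensemble outputs "mpox" even though one member prefers "chickenpox").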

Citations: 0
MRI-Based Machine Learning Fusion Models to Distinguish Encephalitis and Gliomas
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00957-z
Fei Zheng, Ping Yin, Li Yang, Yujian Wang, Wenhan Hao, Qi Hao, Xuzhu Chen, Nan Hong

This paper aims to compare the performance of classical machine learning (CML) models and deep learning (DL) models, and to assess the effectiveness of utilizing fusion radiomics from both CML and DL in distinguishing encephalitis from glioma in atypical cases. We analysed axial FLAIR images from preoperative MRI in 116 patients pathologically confirmed as having gliomas or clinically diagnosed with encephalitis. Three CML models (logistic regression (LR), support vector machine (SVM), and multi-layer perceptron (MLP)), three DL models (DenseNet 121, ResNet 50, and ResNet 18), and a deep learning radiomics (DLR) model were established. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, negative predictive value (NPV), and positive predictive value (PPV) were calculated for the training and validation sets. In addition, a deep learning radiomics nomogram (DLRN) and a web calculator were designed as tools to aid clinical decision-making. The best DL model (ResNet50) consistently outperformed the best CML model (LR). The DLR model had the best predictive performance, with an AUC, sensitivity, specificity, accuracy, NPV, and PPV of 0.879, 0.929, 0.800, 0.875, 0.867, and 0.889 in the validation sets, respectively. The calibration curve of the DLR model shows good agreement between prediction and observation, and decision curve analysis (DCA) indicated that the DLR model had a higher overall net benefit than the other two models (ResNet50 and LR). Meanwhile, the DLRN and web calculator can provide dynamic assessments. Machine learning (ML) models have the potential to non-invasively differentiate between encephalitis and glioma in atypical cases. Furthermore, combining DL and CML techniques could enhance the performance of the ML models.
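The reported sensitivity, specificity, accuracy, NPV, and PPV all derive from the binary confusion matrix; a minimal sketch with made-up labels (not the study's data) is:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # recall of the positive class
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical labels: 1 = glioma, 0 = encephalitis.
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

With one false negative and one false positive out of six cases, every metric here comes out to 2/3; in the paper these quantities are computed per model on the training and validation splits.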

Citations: 0
An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-12 · DOI: 10.1007/s10278-023-00925-7
Sami Azam, Sidratul Montaha, Mohaimenul Azam Khan Raiaan, A. K. M. Rakibul Haque Rafid, Saddam Hossain Mukta, Mirjam Jonkman

An automated computer-aided approach might aid radiologists in diagnosing breast cancer at a primary stage. This study proposes a novel decision support system to classify breast tumors as benign or malignant based on clinically important features, using ultrasound images. Nine handcrafted features, which align with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of the ultrasound images. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, resulting in test accuracies in the range of 96 to 99%. In addition, four feature selection techniques are explored, in which two features are eliminated according to the feature ranking score of each feature selection method. The Random Forest classifier is trained with the resulting four feature sets. Results indicate that, even when eliminating only two features, the performance of the model is reduced for each feature selection technique. These experiments validate the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature in order to find a threshold range that distinguishes benign and malignant tumors. Based on the threshold ranges of particular features, a decision support system is developed such that if at least eight out of nine features are within the threshold range, the image is denoted as correctly predicted. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, which means that the decision support system outperforms all the previously trained ML models. Moreover, after calculating individual class-based test accuracies, a test accuracy of 99.31% is attained for the benign class, with only three benign instances misclassified out of 437, and a test accuracy of 99.52% is attained for the malignant class, with only one malignant instance misclassified out of 210. This system is robust, time-effective, and reliable, as the radiologists’ criteria are followed, and it may aid specialists in making a diagnosis.
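The eight-of-nine threshold rule described above can be sketched directly; the feature names and threshold ranges below are placeholders, not the PDF-derived ranges from the study:

```python
def rule_based_label(features, ranges, min_hits=8):
    """Return True when at least `min_hits` feature values fall inside
    their class-specific threshold ranges (placeholder decision rule)."""
    hits = sum(1 for name, (lo, hi) in ranges.items()
               if lo <= features[name] <= hi)
    return hits >= min_hits

# Placeholder thresholds for nine hypothetical features f0..f8.
ranges = {f"f{i}": (0.0, 1.0) for i in range(9)}

typical  = {f"f{i}": 0.5 for i in range(9)}   # 9 of 9 features in range
atypical = dict(typical, f0=2.0, f1=2.0)      # only 7 of 9 in range
```

Requiring eight of nine hits rather than all nine makes the rule tolerant of a single out-of-range feature, which matches the system's stated design; the real thresholds would come from the per-feature PDF analysis.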

An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images. Sami Azam, Sidratul Montaha, Mohaimenul Azam Khan Raiaan, A. K. M. Rakibul Haque Rafid, Saddam Hossain Mukta, Mirjam Jonkman. Journal of Digital Imaging, IF 4.4, CAS Tier 2 (Engineering & Technology). Pub Date: 2024-01-12. DOI: 10.1007/s10278-023-00925-7
Analysis of Validation Performance of a Machine Learning Classifier in Interstitial Lung Disease Cases Without Definite or Probable Usual Interstitial Pneumonia Pattern on CT Using Clinical and Pathology-Supported Diagnostic Labels
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-11 · DOI: 10.1007/s10278-023-00914-w
Marcello Chang, Joshua J. Reicher, Angad Kalra, Michael Muelly, Yousef Ahmad

We previously validated Fibresolve, a machine learning classifier system that non-invasively predicts idiopathic pulmonary fibrosis (IPF) diagnosis. The system incorporates an automated deep learning algorithm that analyzes chest computed tomography (CT) imaging to assess for features associated with IPF. Here, we assess its performance on patterns beyond the characteristic features of the usual interstitial pneumonia (UIP) pattern. The machine learning classifier was previously developed and validated using standard training, validation, and test sets, with clinical plus pathologically determined ground truth. The multi-site 295-patient validation dataset was used for focused subgroup analysis in this investigation to evaluate the classifier's performance range in cases with and without radiologic UIP and probable UIP designations. Radiologic assessment of specific features of UIP, including the presence and distribution of reticulation, ground glass, bronchiectasis, and honeycombing, was used to assign the radiologic pattern. Output from the classifier was assessed within the various UIP subgroups. The machine learning classifier was able to classify cases not meeting the criteria for UIP or probable UIP as IPF with an estimated sensitivity of 56–65% and an estimated specificity of 92–94%. Example cases demonstrated non-basilar-predominant as well as ground-glass patterns that were indeterminate for UIP by subjective imaging criteria but that the classifier was able to correctly identify as IPF, as confirmed by multidisciplinary discussion generally inclusive of histopathology. The machine learning classifier Fibresolve may be helpful in the diagnosis of IPF in cases without radiologic UIP and probable UIP patterns.
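The estimated sensitivity and specificity ranges reported above are ordinary confusion-matrix ratios over the subgroup cases. A minimal sketch follows; the counts are invented for illustration and are not the study's data.

```python
# Confusion-matrix ratios behind sensitivity/specificity estimates.
# The subgroup counts below are hypothetical, not the study's data.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=29, fn=21, tn=139, fp=11)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.58, specificity=0.93
```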

Pulmonary Nodule Classification Using a Multiview Residual Selective Kernel Network
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-11 · DOI: 10.1007/s10278-023-00928-4
Herng-Hua Chang, Cheng-Zhe Wu, Audrey Haihong Gallogly

Lung cancer is one of the leading causes of death worldwide, and early detection is crucial to reduce mortality. A reliable computer-aided diagnosis (CAD) system can help facilitate early detection of malignant nodules. Although existing methods provide adequate classification accuracy, there is still room for improvement. This study investigates a new CAD scheme for predicting the malignant likelihood of lung nodules in computed tomography (CT) images based on a deep learning strategy. Drawing on residual learning and the selective kernel, we designed an efficient residual selective kernel (RSK) block to handle the diversity of lung nodules with various shapes and obscure structures. Built on this RSK block, we established a multiview RSK network (MRSKNet), which is fed three anatomical planes in the axial, coronal, and sagittal directions. To reinforce classification efficiency, seven handcrafted texture features computed with a filter-like strategy were explored; among them, the homogeneity (HOM) feature maps are concatenated with the corresponding intensity CT images as input, leading to an improved network architecture. Evaluated on the public benchmark Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) challenge database with ten-fold cross validation of binary classification, our experimental results indicated high area under the receiver operating characteristic curve (AUC) and accuracy scores. A better compromise between recall and specificity was struck with the suggested concatenation strategy compared to many state-of-the-art approaches. The proposed pulmonary nodule classification framework exhibited great efficacy and achieved a high AUC of 0.9711. The association of handcrafted texture features with deep learning models is promising for advancing classification performance. The developed pulmonary nodule CAD network architecture has potential to facilitate the diagnosis of lung cancer in further image processing applications.
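The "filter-like" handcrafted-feature idea can be illustrated with a sliding-window GLCM homogeneity (HOM) map that is stacked with the intensity image as a multi-channel network input. The window size, gray-level quantization, and single horizontal co-occurrence offset below are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def homogeneity_map(img, win=3, levels=8):
    """Sliding-window GLCM homogeneity (offset (0, 1)) as a feature map.

    Simplified sketch: quantize the image to `levels` gray levels, then for
    each interior pixel build a tiny GLCM from horizontal neighbor pairs in
    a win x win patch and sum P(i, j) / (1 + |i - j|). Border pixels stay 0.
    """
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # gray-level bins
    h, w = q.shape
    pad = win // 2
    out = np.zeros_like(img, dtype=float)
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            patch = q[y - pad:y + pad + 1, x - pad:x + pad + 1]
            a, b = patch[:, :-1].ravel(), patch[:, 1:].ravel()  # neighbor pairs
            glcm = np.zeros((levels, levels))
            np.add.at(glcm, (a, b), 1)          # accumulate co-occurrences
            glcm /= glcm.sum()                  # normalize to probabilities
            i, j = np.indices(glcm.shape)
            out[y, x] = (glcm / (1.0 + np.abs(i - j))).sum()
    return out

img = np.random.rand(16, 16).astype(np.float32)  # stand-in for a CT slice
hom = homogeneity_map(img)
stacked = np.stack([img, hom], axis=0)           # 2-channel network input
print(stacked.shape)  # (2, 16, 16)
```

In practice a library such as scikit-image's `graycomatrix`/`graycoprops` would replace the inner loop; the sketch only shows why HOM behaves like a local filter response that can be concatenated channel-wise with the intensity image.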

Deep Learning–Assisted Identification of Femoroacetabular Impingement (FAI) on Routine Pelvic Radiographs
IF 4.4 · CAS Tier 2 (Engineering & Technology) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Pub Date: 2024-01-11 · DOI: 10.1007/s10278-023-00920-y
Michael K. Hoy, Vishal Desai, Simukayi Mutasa, Robert C. Hoy, Richard Gorniak, Jeffrey A. Belair

The aim was to use a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (FAI). A retrospective search of hip/pelvis radiographs obtained to evaluate patients for FAI yielded 3050 studies. Each hip was classified separately by the original interpreting radiologist as follows: 724 hips had severe cam-type FAI morphology, 962 moderate, 846 mild, and 518 hips were normal. The anteroposterior (AP) view from each study was anonymized and extracted. After localization of the hip joints by a novel convolutional neural network (CNN) based on the focal loss principle, a second CNN classified each hip image as cam-positive or no FAI. Accuracy was 74% for diagnosing normal versus abnormal cam-type FAI morphology, with an aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. The aggregate AUC was 0.736. A deep learning system can be applied to detect FAI-related changes on single-view pelvic radiographs. Deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist.
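The aggregate AUC reported above can be computed rank-wise as the probability that a cam-positive case receives a higher classifier score than a normal case (the Mann-Whitney formulation). The scores below are toy values, not the study's outputs.

```python
# Empirical AUC via pairwise ranking (Mann-Whitney); ties count 0.5.
# The scores are invented toy values, higher = more cam-positive.

def auc(scores_pos, scores_neg):
    """Probability a positive case outscores a negative case."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.35, 0.7]   # hypothetical cam-positive scores
neg = [0.2, 0.4, 0.3, 0.8]    # hypothetical normal scores
print(auc(pos, neg))  # 0.78125
```

For real score arrays, scikit-learn's `roc_auc_score` computes the same quantity efficiently.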
