
Latest Publications: Journal of Digital Imaging

Exploring the Low-Dose Limit for Focal Hepatic Lesion Detection with a Deep Learning-Based CT Reconstruction Algorithm: A Simulation Study on Patient Images
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-03-19 · DOI: 10.1007/s10278-024-01080-3
Yongchun You, Sihua Zhong, Guozhi Zhang, Yuting Wen, Dian Guo, Wanjiang Li, Zhenlin Li

This study aims to investigate the maximum achievable dose reduction for applying a new deep learning-based reconstruction algorithm, namely the artificial intelligence iterative reconstruction (AIIR), in computed tomography (CT) for hepatic lesion detection. A total of 40 patients with 98 clinically confirmed hepatic lesions were retrospectively included. The mean volume CT dose index was 13.66 ± 1.73 mGy in routine-dose portal venous CT examinations, where the images were originally obtained with hybrid iterative reconstruction (HIR). Low-dose simulations were performed in the projection domain for 40%-, 20%-, and 10%-dose levels, followed by reconstruction using both HIR and AIIR. Two radiologists were asked to detect hepatic lesions on each set of low-dose images in separate sessions. Qualitative metrics including lesion conspicuity, diagnostic confidence, and overall image quality were evaluated using a 5-point scale. The lesion contrast-to-noise ratio (CNR) was also calculated for quantitative assessment. The lesion CNR on AIIR at reduced doses was significantly higher than that on routine-dose HIR (all p < 0.05). Lower qualitative image quality was observed as the radiation dose was reduced, while there were no significant differences between 40%-dose AIIR and routine-dose HIR images. The lesion detection rate was 100% (98/98), 98% (96/98), and 73.5% (72/98) on 40%-, 20%-, and 10%-dose AIIR, respectively, whereas it was 98% (96/98), 73.5% (72/98), and 40% (39/98) on the corresponding low-dose HIR images. AIIR outperformed HIR in simulated low-dose CT examinations of the liver. The use of AIIR allows up to 60% dose reduction for lesion detection while maintaining image quality comparable to routine-dose HIR.
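For readers unfamiliar with the metric, lesion CNR is conventionally computed from region-of-interest (ROI) statistics. The abstract does not specify the authors' exact ROI convention, so the following is the standard textbook form rather than their verified definition:

```latex
\mathrm{CNR} = \frac{\lvert \mu_{\text{lesion}} - \mu_{\text{liver}} \rvert}{\sigma_{\text{liver}}}
```

Here, \mu denotes the mean attenuation (in HU) within the lesion or adjacent liver-parenchyma ROI, and \sigma_{\text{liver}} is the standard deviation of the background ROI, taken as the noise estimate.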

Citations: 0
Auto-segmentation of Adult-Type Diffuse Gliomas: Comparison of Transfer Learning-Based Convolutional Neural Network Model vs. Radiologists
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-21 · DOI: 10.1007/s10278-024-01044-7

Abstract

Segmentation of glioma is crucial for quantitative brain tumor assessment, to guide therapeutic research and clinical management, but it is very time-consuming when performed manually. Fully automated tools for the segmentation of multi-sequence MRI are needed. We developed and pretrained a deep learning (DL) model using publicly available datasets A (n = 210) and B (n = 369) containing FLAIR, T2WI, and contrast-enhanced (CE)-T1WI. This was then fine-tuned with our institutional dataset (n = 197) containing ADC, T2WI, and CE-T1WI, manually annotated by radiologists, and split into training (n = 100) and testing (n = 97) sets. The Dice similarity coefficient (DSC) was used to compare model outputs and manual labels. A third independent radiologist assessed segmentation quality on a semi-quantitative 5-point scale. Differences in DSC between new and recurrent gliomas, and between unifocal and multifocal gliomas, were analyzed using the Mann–Whitney test. Semi-quantitative scores were compared using the chi-square test. We found good agreement between segmentations from the fine-tuned DL model and ground-truth manual segmentations (median DSC: 0.729, standard deviation: 0.134). DSC was higher for newly diagnosed (0.807) than recurrent (0.698) gliomas (p < 0.001), and higher for unifocal (0.747) than multifocal (0.613) cases (p = 0.001). Semi-quantitative scores of DL and manual segmentation were not significantly different (mean: 3.567 vs. 3.639; 93.8% vs. 97.9% scoring ≥ 3, p = 0.107). In conclusion, the proposed transfer-learning DL model performed similarly to human radiologists in glioma segmentation on both structural and ADC sequences. Further improvement is needed for segmenting challenging postoperative and multifocal glioma cases.
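For reference, the Dice similarity coefficient between a predicted segmentation mask P and the manual ground-truth mask G is defined as:

```latex
\mathrm{DSC}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert}
```

It ranges from 0 (no overlap) to 1 (perfect agreement), so the reported median of 0.729 indicates substantial but imperfect overlap with the radiologists' labels.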

Citations: 0
Developing a Radiomics Atlas Dataset of normal Abdominal and Pelvic computed Tomography (RADAPT)
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-21 · DOI: 10.1007/s10278-024-01028-7

Abstract

Atlases of normal genomics, transcriptomics, proteomics, and metabolomics have been published in an attempt to understand the biological phenotype in health and disease and to set the basis for comprehensive comparative omics studies. No such atlas exists for radiomics data. The purpose of this study was to systematically create a dataset of normal abdominal and pelvic radiomics that can be used for model development and validation. Young adults without any previously known disease, aged > 17 and ≤ 36 years old, were retrospectively included. All patients had undergone CT scanning for emergency indications. Where abnormal findings were identified, the relevant anatomical structures were excluded. Deep learning was used to automatically segment the majority of visible anatomical structures with the TotalSegmentator model as applied in 3D Slicer. Radiomics features, including first-order, texture, wavelet, and Laplacian-of-Gaussian transformed features, were extracted with PyRadiomics. A GitHub repository was created to host the resulting dataset. Radiomics data were extracted from a total of 531 patients with a mean age of 26.8 ± 5.19 years, including 250 female and 281 male patients. A maximum of 53 anatomical structures were segmented and used for subsequent radiomics data extraction. Radiomics features were derived from a total of 526 non-contrast and 400 contrast-enhanced (portal venous) series. The dataset is publicly available for model development and validation purposes.
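The extraction pipeline described above can be sketched with the public PyRadiomics API. The file paths below are placeholders, and the exact settings (which filtered image types and feature classes were enabled, LoG sigma values) are assumptions for illustration, not the authors' published RADAPT configuration:

```python
import SimpleITK as sitk
from radiomics import featureextractor

# Configure the extractor; enabling wavelet and LoG filtered features
# mirrors the feature classes named in the abstract (assumed settings).
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableImageTypes(Original={}, Wavelet={}, LoG={"sigma": [1.0, 3.0]})
extractor.enableAllFeatures()

# Placeholder paths: one CT series and one TotalSegmentator-derived organ mask.
image = sitk.ReadImage("ct_series.nii.gz")
mask = sitk.ReadImage("liver_mask.nii.gz")

features = extractor.execute(image, mask)
for name, value in features.items():
    if not name.startswith("diagnostics"):  # skip provenance metadata
        print(name, value)
```

In a dataset-scale run this loop would repeat over every patient series and every segmented structure, with the resulting rows collected into one table per structure.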

Citations: 0
Automatic Tracking of Hyoid Bone Displacement and Rotation Relative to Cervical Vertebrae in Videofluoroscopic Swallow Studies Using Deep Learning
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-21 · DOI: 10.1007/s10278-024-01039-4
Wuqi Li, Shitong Mao, Amanda S. Mahoney, James L. Coyle, Ervin Sejdić

Hyoid bone displacement and rotation are critical kinematic events of the swallowing process in the assessment of videofluoroscopic swallow studies (VFSS). However, quantitative analysis of such events requires frame-by-frame manual annotation, which is labor-intensive and time-consuming. Our work aims to develop a method for automatically tracking hyoid bone displacement and rotation in VFSS. We proposed a full high-resolution network, a deep learning architecture, to detect the anterior and posterior points of the hyoid bone to identify its location and rotation. Meanwhile, the anterior-inferior corners of the C2 and C4 vertebrae were detected simultaneously to automatically establish a new coordinate system and eliminate the effect of posture change. The proposed model was developed with 59,468 VFSS frames collected from 1488 swallowing samples, and it achieved an average landmark localization error of 2.38 pixels (around 0.5% of a 448 × 448-pixel image) and an average angle prediction error of 0.065 radians in predicting C2–C4 and hyoid bone angles. In addition, the displacement of the hyoid bone center was automatically tracked in a frame-by-frame analysis, achieving a mean absolute error of 2.22 pixels in the x-axis and 2.78 pixels in the y-axis. The results of this study support the effectiveness and accuracy of the proposed method in detecting hyoid bone displacement and rotation. Our study provides an automatic method for analyzing hyoid bone kinematics during VFSS, which could contribute to early diagnosis and effective disease management.
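A minimal sketch of the anatomical normalization step described above, assuming (as the abstract implies but does not fully specify) that the C2–C4 anterior-inferior corners define the reference axis, with the C4 corner as the origin; the sign convention for the perpendicular axis is an arbitrary illustrative choice:

```python
import numpy as np

def hyoid_in_c2c4_frame(c2, c4, hyoid):
    """Express a hyoid landmark in a C2-C4 anatomical frame.

    c2, c4, hyoid: (x, y) pixel coordinates of the C2 and C4
    anterior-inferior corners and the hyoid bone center.
    Returns the hyoid position in a frame whose origin is C4 and
    whose y-axis points from C4 toward C2, making the measurement
    invariant to patient posture in the image.
    """
    c2, c4, hyoid = map(np.asarray, (c2, c4, hyoid))
    y_axis = (c2 - c4) / np.linalg.norm(c2 - c4)
    x_axis = np.array([y_axis[1], -y_axis[0]])  # perpendicular unit vector
    rel = hyoid - c4
    return np.array([rel @ x_axis, rel @ y_axis])

# Frame-by-frame displacement between two video frames (toy coordinates):
p0 = hyoid_in_c2c4_frame((210, 120), (205, 260), (150, 200))
p1 = hyoid_in_c2c4_frame((211, 121), (206, 261), (142, 188))
print("displacement (px):", p1 - p0)
```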

Citations: 0
Personalized Impression Generation for PET Reports Using Large Language Models
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-02 · DOI: 10.1007/s10278-024-00985-3
Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara M. Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw

Large language models (LLMs) have shown promise in accelerating radiology reporting by summarizing clinical findings into impressions. However, automatic impression generation for whole-body PET reports presents unique challenges and has received little attention. Our study aimed to evaluate whether LLMs can create clinically useful impressions for PET reporting. To this end, we fine-tuned twelve open-source language models on a corpus of 37,370 retrospective PET reports collected from our institution. All models were trained using the teacher-forcing algorithm, with the report findings and patient information as input and the original clinical impressions as reference. An extra input token encoded the reading physician’s identity, allowing models to learn physician-specific reporting styles. To compare the performances of different models, we computed various automatic evaluation metrics and benchmarked them against physician preferences, ultimately selecting PEGASUS as the top LLM. To evaluate its clinical utility, three nuclear medicine physicians assessed the PEGASUS-generated impressions and original clinical impressions across 6 quality dimensions (3-point scales) and an overall utility score (5-point scale). Each physician reviewed 12 of their own reports and 12 reports from other physicians. When physicians assessed LLM impressions generated in their own style, 89% were considered clinically acceptable, with a mean utility score of 4.08/5. On average, physicians rated these personalized impressions as comparable in overall utility to the impressions dictated by other physicians (4.03, P = 0.41). In summary, our study demonstrated that personalized impressions generated by PEGASUS were clinically useful in most cases, highlighting its potential to expedite PET reporting by automatically drafting impressions.
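A hedged sketch of the input construction: encoding the reading physician's identity as an extra special token prepended to the findings, then fine-tuning PEGASUS with the impression as the target (supplying labels to a seq2seq model trains with teacher forcing, as the abstract describes). The token format, base checkpoint, and field layout are illustrative assumptions, not the paper's exact recipe:

```python
from transformers import AutoTokenizer, PegasusForConditionalGeneration

checkpoint = "google/pegasus-large"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = PegasusForConditionalGeneration.from_pretrained(checkpoint)

# One hypothetical special token per reading physician.
physician_tokens = [f"<physician_{i}>" for i in range(12)]
tokenizer.add_special_tokens({"additional_special_tokens": physician_tokens})
model.resize_token_embeddings(len(tokenizer))

findings = "FDG-avid nodal lesion in the left axilla ..."  # findings + patient info
impression = "1. Findings consistent with ..."             # original impression

inputs = tokenizer(f"<physician_3> {findings}", return_tensors="pt",
                   truncation=True, max_length=1024)
labels = tokenizer(text_target=impression, return_tensors="pt",
                   truncation=True, max_length=256)

# Supplying labels trains with teacher forcing (shifted decoder inputs).
loss = model(**inputs, labels=labels["input_ids"]).loss
loss.backward()
```

At inference time the same physician token steers generation toward that reader's dictation style, which is what the personalized evaluation in the abstract measures.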

Citations: 0
Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-02 · DOI: 10.1007/s10278-024-00968-4
Le Nguyen Binh, Nguyen Thanh Nhu, Vu Pham Thao Vy, Do Le Hoang Son, Truong Nguyen Khanh Hung, Nguyen Bach, Hoang Quoc Huy, Le Van Tuan, Nguyen Quoc Khanh Le, Jiunn-Horng Kang

Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008–2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate the model's performance, which was then compared to the diagnostic performance of two readers, an orthopedist and a radiologist. The model's mean average precision on the validation set for the four classes was 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model achieved sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.
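For reference, the per-class sensitivity, specificity, and AUC reported above can be computed in a one-vs-rest fashion along these lines; the toy arrays below stand in for the actual test-set predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

classes = ["FRM", "FUM", "FRE", "FUE"]
y_true = np.array([0, 1, 2, 3, 0, 2, 1, 3, 2, 0])   # toy ground-truth labels
y_score = np.random.default_rng(0).random((10, 4))  # toy per-class scores
y_score /= y_score.sum(axis=1, keepdims=True)
y_pred = y_score.argmax(axis=1)

for k, name in enumerate(classes):
    t = (y_true == k).astype(int)   # one-vs-rest ground truth
    p = (y_pred == k).astype(int)   # one-vs-rest prediction
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    auc = roc_auc_score(t, y_score[:, k])
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f} AUC={auc:.2f}")
```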

Citations: 0
Evaluation of Mucosal Healing in Crohn's Disease: Radiomics Models of Intestinal Wall and Mesenteric Fat Based on Dual-Energy CT
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-02-01 · DOI: 10.1007/s10278-024-00989-z

Abstract

This study aims to assess the effectiveness of radiomics signatures obtained from dual-energy computed tomography enterography (DECTE) in the evaluation of mucosal healing (MH) in patients diagnosed with Crohn's disease (CD). A total of 106 CD patients with 221 diseased intestinal segments (79 with MH and 142 non-MH) from two medical centers were included and randomly divided into training and testing cohorts at a ratio of 7:3. Radiomics features were extracted from the enteric-phase iodine maps and the 40-keV and 70-keV virtual monoenergetic images (VMIs) of the diseased intestinal segments, as well as from mesenteric fat. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) logistic regression. Radiomics models were subsequently established, and their accuracy in identifying MH in CD was assessed by calculating the area under the receiver operating characteristic curve (AUC). The combined-iodine model, formulated by integrating the intestinal and mesenteric fat radiomics features of the iodine maps, exhibited the most favorable performance in evaluating MH, with AUCs of 0.989 (95% confidence interval (CI) 0.977–1.000) in the training cohort and 0.947 (95% CI 0.884–1.000) in the testing cohort. Patients categorized as high risk by the combined-iodine model displayed a greater probability of disease progression than low-risk patients. The combined-iodine radiomics model, built upon iodine maps of diseased intestinal segments and mesenteric fat, demonstrated promising performance in evaluating MH in CD patients.
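The LASSO step named in the abstract corresponds to L1-penalized logistic regression, whose penalty drives most coefficients to exactly zero and thereby selects features. A minimal scikit-learn sketch; the feature matrix, labels, and cross-validation settings are placeholders, not the authors' configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(221, 300))   # placeholder: radiomics features per segment
y = rng.integers(0, 2, size=221)  # placeholder: MH (1) vs non-MH (0)

# Standardize first: L1 shrinkage is scale-sensitive.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="liblinear",
                         Cs=10, cv=5, max_iter=5000),
)
model.fit(X, y)

coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
selected = np.flatnonzero(coefs)  # features with nonzero weight survive
print(f"{selected.size} features retained out of {coefs.size}")
```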

Citations: 0
Impacts of Adaptive Statistical Iterative Reconstruction-V and Deep Learning Image Reconstruction Algorithms on Robustness of CT Radiomics Features: Opportunity for Minimizing Radiomics Variability Among Scans of Different Dose Levels
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-01-29 · DOI: 10.1007/s10278-023-00901-1
Jingyu Zhong, Zhiyuan Wu, Lingyun Wang, Yong Chen, Yihan Xia, Lan Wang, Jianying Li, Wei Lu, Xiaomeng Shi, Jianxing Feng, Haipeng Dong, Huan Zhang, Weiwu Yao

This study aims to investigate the influence of adaptive statistical iterative reconstruction-V (ASIR-V) and deep learning image reconstruction (DLIR) on the robustness of CT radiomics features. A standardized phantom was scanned under single-energy CT (SECT) and dual-energy CT (DECT) modes at standard and low (20 and 10 mGy) dose levels. SECT 120-kVp images and corresponding DECT 120-kVp-like virtual monochromatic images were generated with filtered back-projection (FBP), ASIR-V at 40% (AV-40) and 100% (AV-100) blending levels, and the DLIR algorithm at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) strength levels. Ninety-four features were extracted via PyRadiomics. Reproducibility of features was calculated between standard and low dose levels, between reconstruction algorithms in reference to FBP images, and within scan mode, using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). The average percentage of features with ICC > 0.90 and CCC > 0.90 between the two dose levels was 21.28% and 20.75% in AV-40 images, and 39.90% and 35.11% in AV-100 images, respectively, and increased from 15.43 to 45.22% and from 15.43 to 44.15% with increasing DLIR strength. The average percentage of features with ICC > 0.90 and CCC > 0.90 in reference to FBP images was 26.07% and 25.80% in AV-40 images, and 18.88% and 18.62% in AV-100 images, respectively, and decreased from 27.93 to 17.82% and from 27.66 to 17.29% with increasing DLIR strength. The DLIR and ASIR-V algorithms showed low reproducibility in reference to FBP images, while the high-strength DLIR algorithm provides an opportunity for minimizing radiomics variability due to dose reduction.
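For readers unfamiliar with the reproducibility metrics, Lin's concordance correlation coefficient used above has a simple closed form that can be computed directly; a small sketch with toy values (ICC would typically come from a dedicated package such as pingouin):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two
    measurements of the same radiomics feature, e.g., the feature
    at standard dose (x) vs. at low dose (y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

dose_std = [0.81, 0.92, 1.10, 0.75, 0.88]  # toy feature values
dose_low = [0.78, 0.95, 1.02, 0.71, 0.90]
print(f"CCC = {concordance_ccc(dose_std, dose_low):.3f}")
```

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts between the two measurements, which is why it is paired with the ICC for reproducibility studies.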

Citations: 0
An Automatic Framework for Nasal Esthetic Assessment by ResNet Convolutional Neural Network
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-01-29 · DOI: 10.1007/s10278-024-00973-7

Abstract

Nasal base aesthetics is an interesting and challenging problem that has attracted increasing attention from researchers in recent years. With this in mind, we propose a novel automatic framework (AF) for evaluating the nasal base, which can be useful for improving symmetry in rhinoplasty and reconstruction. The introduced AF includes a hybrid model for nasal base landmark recognition and a combined model for predicting nasal base symmetry. The proposed state-of-the-art nasal base landmark detection model is trained on nasal base images for comprehensive qualitative and quantitative assessment. Then, deep convolutional neural network (CNN) and multi-layer perceptron (MLP) models are integrated by concatenating their last hidden layers to evaluate nasal base symmetry based on geometry features and tiled images of the nasal base. This study also explores data augmentation, applying methods motivated by commonly used image augmentation techniques. According to the experimental findings, the results of the AF are closely related to the otolaryngologists' ratings and are useful for preoperative planning, intraoperative decision-making, and postoperative assessment. Furthermore, visualization indicates that the proposed AF is capable of predicting nasal base symmetry and capturing asymmetry areas to facilitate semantic predictions. The code is accessible at https://github.com/AshooriMaryam/Nasal-Aesthetic-Assessment-Deep-learning.
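A minimal PyTorch sketch of the fusion described above: a CNN branch for the tiled nasal-base images and an MLP branch for the geometry features, joined by concatenating their last hidden layers before a classification head. The branch widths and the ResNet variant are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SymmetryNet(nn.Module):
    def __init__(self, n_geometry_features: int, n_classes: int = 2):
        super().__init__()
        cnn = resnet18(weights=None)   # image branch (assumed ResNet variant)
        cnn.fc = nn.Identity()         # expose the 512-d last hidden layer
        self.cnn = cnn
        self.mlp = nn.Sequential(      # geometry-feature branch
            nn.Linear(n_geometry_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, geometry):
        # Concatenate the two branches' last hidden representations.
        fused = torch.cat([self.cnn(image), self.mlp(geometry)], dim=1)
        return self.head(fused)

model = SymmetryNet(n_geometry_features=20)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 20))
print(logits.shape)  # torch.Size([4, 2])
```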

Citations: 0
Review of the Free Research Software for Computer-Assisted Interventions
IF 4.4 · CAS Zone 2 (Engineering & Technology) · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-01-29 · DOI: 10.1007/s10278-023-00912-y
Zaiba Amla, Parminder Singh Khehra, Ashley Mathialagan, Elodie Lugez

Research software is continuously developed to facilitate progress and innovation in the medical field. Over time, numerous research software programs have been created, making it challenging to keep abreast of what is available. This work aims to evaluate the software most frequently utilized by the computer-assisted intervention (CAI) research community. The software assessments encompass a range of criteria, including load time, stress load, multi-tasking, extensibility and range of functionalities, user-friendliness, documentation, and technical support. A total of eight software programs were selected, including 3D Slicer, Elastix, ITK-SNAP, MedInria, MeVisLab, MIPAV, and Seg3D. While none of the software was found to be perfect on all evaluation criteria, 3D Slicer and ITK-SNAP emerged with the highest overall rankings. These two programs can frequently complement each other, as 3D Slicer offers a broad and customizable range of features, while ITK-SNAP excels at performing fundamental tasks efficiently. Nonetheless, each software package has distinctive features that may better fit the requirements of certain research projects. This review provides valuable information to CAI researchers seeking the software best suited to support their projects. The evaluation also offers insights for software development teams, as it highlights areas where the software can be improved.

Citations: 0