
Journal of Digital Imaging: Latest Publications

CT-Based Radiomics and Machine Learning for Differentiating Benign, Borderline, and Early-Stage Malignant Ovarian Tumors
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-11 DOI: 10.1007/s10278-023-00903-z
Jia Chen, Lei Liu, Ziying He, Danke Su, Chanzhen Liu

To explore the value of a CT-based radiomics model in the differential diagnosis of benign ovarian tumors (BeOTs), borderline ovarian tumors (BOTs), and early malignant ovarian tumors (eMOTs). This retrospective study included 258 patients with pathologically confirmed ovarian tumors treated between January 2014 and February 2021. The patients were randomly allocated to a training cohort (n = 198) and a test cohort (n = 60). A three-dimensional (3D) volume of interest (VOI) was characterized at the maximum level of the images, and 4238 radiomic features were extracted from the VOI of each patient. The Wilcoxon–Mann–Whitney (WMW) test, least absolute shrinkage and selection operator (LASSO), and support vector machine (SVM) were employed to select radiomic features. Five machine learning (ML) algorithms were applied to construct three-class diagnostic models. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the radiomics models, and the test cohort was used to verify their generalization ability. Receiver-operating characteristic (ROC) analysis was used to evaluate diagnostic performance, and the global and discrimination performance of the five models was evaluated by the average area under the ROC curve (AUC). The random forest (RF) model demonstrated the best diagnostic performance in the training cohort (micro/macro average AUC, 0.98/0.99), which was then confirmed by LOOCV (micro/macro average AUC, 0.89/0.88) and external validation on the test cohort (micro/macro average AUC, 0.81/0.79). Our proposed CT-based radiomics diagnostic models may effectively assist in preoperatively differentiating BeOTs, BOTs, and eMOTs.
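As a sketch of the pipeline described above, the snippet below pairs an L1-penalized (LASSO-style) selector with a random forest and scores it by micro/macro-averaged one-vs-rest AUC over three classes. The synthetic data, the hyperparameters, and the use of L1 logistic regression as the selector are illustrative assumptions, not the authors' exact setup.

```python
# Minimal three-class radiomics pipeline: LASSO-style selection + random forest,
# scored with micro/macro-averaged one-vs-rest AUC (all data here is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler, label_binarize
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(258, 4238))      # 4238 radiomic features per patient
y = rng.integers(0, 3, size=258)      # 0 = BeOT, 1 = BOT, 2 = eMOT

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=198, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
# L1-penalized multinomial logistic regression stands in for LASSO selection.
selector = LogisticRegression(penalty="l1", solver="saga", C=0.5,
                              max_iter=2000).fit(scaler.transform(X_train), y_train)
keep = np.any(selector.coef_ != 0, axis=0)
if not keep.any():                    # guard for this random toy data
    keep = np.argsort(-np.abs(selector.coef_).sum(0))[:50]

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(scaler.transform(X_train)[:, keep], y_train)

proba = rf.predict_proba(scaler.transform(X_test)[:, keep])
y_bin = label_binarize(y_test, classes=[0, 1, 2])
print("micro AUC:", roc_auc_score(y_bin, proba, average="micro"))
print("macro AUC:", roc_auc_score(y_bin, proba, average="macro"))
```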

Citations: 0
Invertible and Variable Augmented Network for Pretreatment Patient-Specific Quality Assurance Dose Prediction
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00930-w
Zhongsheng Zou, Changfei Gong, Lingpeng Zeng, Yu Guan, Bin Huang, Xiuwen Yu, Qiegen Liu, Minghui Zhang

Pretreatment patient-specific quality assurance (prePSQA) is conducted to confirm the accuracy of the delivered radiotherapy dose. However, prePSQA measurement is time-consuming and adds to the workload of medical physicists. The purpose of this work is to propose a novel deep learning (DL) network to improve the accuracy and efficiency of prePSQA. A modified invertible and variable augmented network was developed to predict the three-dimensional (3D) measurement-guided dose (MDose) distribution of 300 cancer patients who underwent volumetric modulated arc therapy (VMAT) between 2018 and 2021, in which 240 cases were randomly selected for training and 60 for testing. For simplicity, the present approach is termed “IVPSQA.” The input data include CT images, the radiotherapy dose exported from the treatment planning system, and the MDose distribution extracted from the verification system. The Adam algorithm was used for first-order gradient-based optimization of the stochastic objective function. The IVPSQA model obtained high-quality 3D prePSQA dose distribution maps in head and neck, chest, and abdomen cases, and outperformed existing U-Net-based prediction approaches in terms of dose difference maps and horizontal profile comparisons. Moreover, quantitative evaluation metrics including SSIM, MSE, and MAE demonstrated that the proposed approach achieved good agreement with ground truth and yielded promising gains over other advanced methods. This study presents the first work on predicting the 3D prePSQA dose distribution using the IVPSQA model. The proposed method could serve as a clinical guidance tool and help medical physicists reduce the measurement workload of prePSQA.
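The abstract does not spell out the IVPSQA architecture, but the "invertible" idea can be illustrated with a minimal additive coupling block, which is invertible by construction. The channel layout (CT plus TPS dose plus augmented variables), the network sizes, and the learning rate below are assumptions for the sketch.

```python
# A minimal invertible additive coupling block (PyTorch), trained with Adam
# as in the paper; everything else here is an illustrative assumption.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Splits channels in half; y = [x1, x2 + f(x1)] is exactly invertible."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv3d(half, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)

block = AdditiveCoupling(channels=4)   # e.g., CT + TPS dose + 2 augmented channels
x = torch.randn(1, 4, 16, 64, 64)      # (batch, channel, depth, height, width)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)  # invertibility check

optimizer = torch.optim.Adam(block.parameters(), lr=1e-4)     # Adam, per the paper
```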

Citations: 0
Generating PET Attenuation Maps via Sim2Real Deep Learning–Based Tissue Composition Estimation Combined with MLACF
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00902-0
Tetsuya Kobayashi, Yui Shigeki, Yoshiyuki Yamakawa, Yumi Tsutsumida, Tetsuro Mizuta, Kohei Hanaoka, Shota Watanabe, Daisuke Morimoto‑Ishikawa, Takahiro Yamada, Hayato Kaida, Kazunari Ishii

Deep learning (DL) has recently attracted attention for data processing in positron emission tomography (PET). Attenuation correction (AC) without computed tomography (CT) data is of particular interest. Here, we present, to our knowledge, the first attempt to generate an attenuation map of the human head via Sim2Real DL-based tissue composition estimation, with the model trained using only a simulated PET dataset. The DL model accepts a two-dimensional non-attenuation-corrected PET image as input and outputs a four-channel tissue-composition map of soft tissue, bone, cavity, and background. An attenuation map is then generated by a linear combination of the tissue composition maps and, finally, used as input for scatter+random estimation and as an initial estimate for attenuation map reconstruction by the maximum likelihood attenuation correction factor (MLACF), i.e., the DL estimate is refined by the MLACF. Preliminary results using clinical brain PET data showed that the proposed DL model tended to estimate anatomical details inaccurately, especially in the neck-side slices. However, it succeeded in estimating overall anatomical structures, and the PET quantitative accuracy with DL-based AC was comparable to that with CT-based AC. Thus, the proposed DL-based approach combined with the MLACF is also a promising CT-less AC approach.
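The linear-combination step can be made concrete: given a four-channel tissue-composition map, the attenuation map is a per-voxel weighted sum of class probabilities. The 511-keV linear attenuation coefficients below are representative textbook values, not the authors' calibrated ones.

```python
# Attenuation map as a linear combination of tissue-composition channels.
import numpy as np

# 511-keV linear attenuation coefficients (cm^-1); representative values only.
MU = {"soft": 0.096, "bone": 0.151, "cavity": 0.0, "background": 0.0}

def attenuation_map(tissue_prob):
    """tissue_prob: (4, H, W) softmax output -> (H, W) mu-map in cm^-1."""
    coeffs = np.array([MU["soft"], MU["bone"], MU["cavity"], MU["background"]])
    return np.tensordot(coeffs, tissue_prob, axes=1)   # weighted sum over channels

# Toy four-channel probability map (each pixel sums to 1).
probs = np.random.dirichlet(np.ones(4), size=(128, 128)).transpose(2, 0, 1)
mu_map = attenuation_map(probs)
print(mu_map.shape)   # (128, 128)
```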

Citations: 0
Machine Learning-Based Multiparametric Magnetic Resonance Imaging Radiomics Model for Preoperative Predicting the Deep Stromal Invasion in Patients with Early Cervical Cancer
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00906-w
Haowen Yan, Gaoting Huang, Zhihe Yang, Yirong Chen, Zhiming Xiang

Deep stromal invasion is an important pathological factor associated with the treatment and prognosis of cervical cancer patients. Accurate determination of deep stromal invasion before radical hysterectomy (RH) is of great value for early clinical treatment decision-making and for improving the prognosis of these patients. Machine learning is gradually being applied in the construction of clinical models to improve the accuracy of clinical diagnosis or prediction, but whether machine learning can improve the preoperative diagnostic accuracy of deep stromal invasion in patients with cervical cancer remained unclear. This cross-sectional study aimed to construct three preoperative diagnostic models for deep stromal invasion in patients with early cervical cancer based on clinical, radiomics, and combined clinical-radiomics data using machine learning methods. We enrolled 229 patients with early cervical cancer receiving RH combined with pelvic lymph node dissection (PLND). The least absolute shrinkage and selection operator (LASSO) and fivefold cross-validation were applied to screen radiomics features. Univariate and multivariate logistic regression analyses were applied to identify clinical predictors. All subjects were divided into a training set (n = 160) and a testing set (n = 69) at a ratio of 7:3. Three light gradient boosting machine (LightGBM) models were constructed on the training set and verified on the testing set. The radiomics features differed statistically between the deep stromal invasion < 1/3 and ≥ 1/3 groups. In the training set, the area under the curve (AUC) of the prediction model based on radiomics features was 0.951 (95% confidence interval (CI) 0.922–0.980), the AUC of the model based on clinical predictors was 0.769 (95% CI 0.703–0.835), and the AUC of the model based on radiomics features and clinical predictors was 0.969 (95% CI 0.947–0.990). The AUC of the model based on radiomics features and clinical predictors was 0.914 (95% CI 0.848–0.980) in the testing set. The prediction model for deep stromal invasion in patients with early cervical cancer based on clinical and radiomics data exhibited good predictive performance with an AUC of 0.969, which might help clinicians identify patients at high risk of deep stromal invasion early and provide timely interventions.
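A minimal sketch of the three-model comparison: one LightGBM classifier per feature set (radiomics, clinical, combined) on a 7:3 stratified split. The feature matrices, the selected-feature counts, and the hyperparameters are synthetic stand-ins, not the study's data.

```python
# Three LightGBM models on radiomics / clinical / combined features (toy data).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 229
radiomics = rng.normal(size=(n, 20))   # LASSO-selected radiomics features (assumed count)
clinical = rng.normal(size=(n, 5))     # logistic-regression-selected predictors (assumed)
y = rng.integers(0, 2, size=n)         # 1 = deep stromal invasion >= 1/3

feature_sets = {
    "radiomics": radiomics,
    "clinical": clinical,
    "combined": np.hstack([radiomics, clinical]),
}
for name, X in feature_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```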

Citations: 0
Deconvolution-Based Pharmacokinetic Analysis to Improve the Prediction of Pathological Information of Breast Cancer
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00915-9
Liangliang Zhang, Ming Fan, Lihua Li

Pharmacokinetic (PK) parameters, which reveal changes in the tumor microenvironment, are related to the pathological information of breast cancer. Tracer kinetic models (e.g., the Tofts-Kety model) with a nonlinear least-squares solver are commonly used to estimate PK parameters. However, this method is sensitive to noise in the images. To relieve the effects of noise, a deconvolution (DEC) method, validated on synthetic concentration–time series, was proposed to accurately calculate PK parameters from breast dynamic contrast-enhanced magnetic resonance imaging. A time-to-peak-based tumor partitioning method was used to divide the whole tumor into three subregions with different kinetic patterns. Radiomic features were calculated from the subregion-based and whole-tumor-based PK parameter maps. The optimal features determined by fivefold cross-validation were used to build random forest classifiers to predict molecular subtypes, Ki-67, and tumor grade. The diagnostic performance, evaluated by the area under the receiver operating characteristic curve (AUC), was compared between the subregion-based and whole-tumor-based PK parameters. The results showed that the DEC method obtained more accurate PK parameters than the Tofts method. Moreover, the subregion-based Ktrans (best AUCs = 0.8319, 0.7032, 0.7132, 0.7490, 0.8074, and 0.6950) achieved better diagnostic performance than the whole-tumor-based Ktrans (AUCs = 0.8222, 0.6970, 0.6511, 0.7109, 0.7620, and 0.5894) for molecular subtypes, Ki-67, and tumor grade. These findings indicate that the DEC-based Ktrans in the subregions has the potential to accurately predict molecular subtypes, Ki-67, and tumor grade.
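The deconvolution idea can be sketched as follows: under the standard Tofts model the tissue curve is the arterial input function convolved with the impulse response Ktrans·exp(−kep·t), so that response can be recovered by solving a regularized linear system instead of a nonlinear fit. The AIF shape, noise level, and Tikhonov regularization weight below are assumptions, not the authors' exact DEC method.

```python
# Standard Tofts model: Ct(t) = Ktrans * integral of Cp(tau) exp(-kep (t - tau)) dtau,
# solved here by matrix deconvolution on synthetic data.
import numpy as np

t = np.arange(0, 300, 5.0)                        # seconds
dt = t[1] - t[0]
Cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)         # assumed population AIF shape
Ktrans_true, kep_true = 0.25 / 60, 0.60 / 60      # per second

h = Ktrans_true * np.exp(-kep_true * t)           # tissue impulse response
Ct = dt * np.convolve(Cp, h)[: len(t)]            # forward model
Ct += np.random.default_rng(1).normal(0, 0.002, len(t))   # acquisition noise

# Deconvolution: Ct = A h, with A the lower-triangular convolution matrix.
A = dt * np.tril(Cp[np.subtract.outer(np.arange(len(t)), np.arange(len(t)))])
lam = 1e-3                                        # Tikhonov regularization (assumed)
h_est = np.linalg.solve(A.T @ A + lam * np.eye(len(t)), A.T @ Ct)

Ktrans_est = h_est[0]                             # h(0) = Ktrans
# kep from a log-linear fit to the early impulse response.
kep_est = -np.polyfit(t[:30], np.log(np.clip(h_est[:30], 1e-9, None)), 1)[0]
print(Ktrans_est * 60, kep_est * 60)              # back to per-minute units
```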

Citations: 0
Automated Quantification of Total Cerebral Blood Flow from Phase-Contrast MRI and Deep Learning
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00948-0
Jinwon Kim, Hyebin Lee, Sung Suk Oh, Jinhee Jang, Hyunyeol Lee

Knowledge of the input blood supply to the brain, represented as total cerebral blood flow (tCBF), is important in evaluating brain health. Phase-contrast (PC) magnetic resonance imaging (MRI) enables blood velocity mapping, allowing for noninvasive measurement of tCBF. In the procedure, manual selection of brain-feeding arteries is an essential step, but it is time-consuming and often subjective. Thus, the purpose of this work was to develop and validate a deep learning (DL)-based technique for automated tCBF quantification. To enhance the DL segmentation performance on arterial blood vessels, in the preprocessing step the magnitude and phase images of PC MRI were multiplied several times. Thereafter, a U-Net was trained on 218 images for three-class segmentation. Network performance was evaluated in terms of the Dice coefficient and the intersection-over-union (IoU) on 40 test images and, additionally, on 20 externally acquired datasets. Finally, tCBF was calculated from the DL-predicted vessel segmentation maps, and its accuracy was statistically assessed with the coefficient of determination (R²), the intraclass correlation coefficient (ICC), paired t-tests, and Bland-Altman analysis, in comparison to manually derived values. Overall, the DL segmentation network provided accurate labeling of arterial blood vessels for both internal (Dice=0.92, IoU=0.86) and external (Dice=0.90, IoU=0.82) tests. Furthermore, statistical analyses of tCBF estimates revealed good agreement between automated and manual quantifications in both internal (R²=0.85, ICC=0.91, p=0.52) and external (R²=0.88, ICC=0.93, p=0.88) test groups. The results suggest the feasibility of a simple and automated protocol for quantifying tCBF from neck PC MRI and deep learning.
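Once the arteries are segmented, tCBF reduces to integrating the through-plane velocity over the labeled pixels. A minimal sketch follows; the units, pixel size, and toy mask are illustrative assumptions.

```python
# tCBF from a PC-MRI velocity map and a DL-predicted vessel mask (toy example).
import numpy as np

def total_cbf(velocity_map, vessel_mask, pixel_area_mm2):
    """Integrate through-plane velocity (cm/s) over segmented arterial
    pixels to obtain flow in mL/min.

    velocity_map : (H, W) phase-derived velocity in cm/s
    vessel_mask  : (H, W) DL-predicted artery labels (bool)
    """
    area_cm2 = pixel_area_mm2 * 1e-2                     # mm^2 -> cm^2
    flow_ml_per_s = np.abs(velocity_map[vessel_mask]).sum() * area_cm2
    return flow_ml_per_s * 60.0                          # cm^3/s -> mL/min

v = np.zeros((128, 128)); v[60:64, 60:64] = 40.0         # 40 cm/s in a small vessel
mask = v > 0
print(total_cbf(v, mask, pixel_area_mm2=0.8))            # hypothetical flow value
```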

Citations: 0
SAA-SDM: Neural Networks Faster Learned to Segment Organ Images
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00947-1

Abstract

In the field of medicine, rapidly and accurately segmenting organs in medical images is a crucial application of computer technology. This paper introduces a feature map module, the Strength Attention Area Signed Distance Map (SAA-SDM), based on the principal component analysis (PCA) principle. The module is designed to accelerate neural networks' convergence toward high precision. SAA-SDM provides the neural network with confidence information regarding the target and background, similar to the signed distance map (SDM), thereby enhancing the network's understanding of semantic information related to the target. Furthermore, this paper presents a training scheme tailored for the module, aiming to achieve finer segmentation and improved generalization performance. Validation of our approach is carried out using TRUS and chest X-ray datasets. Experimental results demonstrate that our method significantly enhances neural networks' convergence speed and precision. For instance, the convergence speed of UNet and UNet++ is improved by more than 30%. Moreover, Segformer achieves an increase of over 6% and 3% in mIoU (mean Intersection over Union) on two test datasets without requiring pre-trained parameters. Our approach reduces the time and resource costs associated with training neural networks for organ segmentation tasks while effectively guiding the network to achieve meaningful learning even without pre-trained parameters.
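SAA-SDM itself is not specified in detail here, but the signed distance map it builds on is standard: negative distances inside the object, positive outside, zero near the boundary. A minimal version using SciPy's Euclidean distance transform:

```python
# Classic signed distance map from a binary segmentation mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Negative inside the object, positive outside, ~zero on the boundary."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)   # distance to the object, outside it
    inside = distance_transform_edt(mask)     # distance to the background, inside it
    return outside - inside

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 24:44] = True                     # a toy "organ"
sdm = signed_distance_map(mask)
print(sdm.min(), sdm.max())                   # negative inside, positive outside
```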

Citations: 0
Application of Machine Learning to Ultrasonography in Identifying Anatomical Landmarks for Cricothyroidotomy Among Female Adults: A Multi-center Prospective Observational Study
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00929-3
Chih-Hung Wang, Jia-Da Li, Cheng-Yi Wu, Yu-Chen Wu, Joyce Tay, Meng-Che Wu, Ching-Hang Hsu, Yi-Kuan Liu, Chu-Song Chen, Chien-Hua Huang

We aimed to develop machine learning (ML)-based algorithms to assist physicians in ultrasound-guided localization of cricoid cartilage (CC) and thyroid cartilage (TC) in cricothyroidotomy. Adult female volunteers were prospectively recruited from two hospitals between September and December 2020. Ultrasonographic images were collected via a modified longitudinal technique. You Only Look Once (YOLOv5s), Faster Regions with Convolutional Neural Network features (Faster R-CNN), and Single Shot Detector (SSD) were selected as the model architectures. A total of 488 women (mean age: 36.0 years) participated in the study, contributing a total of 292,053 frames of ultrasonographic images. The derived ML-based algorithms demonstrated excellent discriminative performance for the presence of CC (area under the receiver operating characteristic curve [AUC]: YOLOv5s, 0.989, 95% confidence interval [CI]: 0.982–0.994; Faster R-CNN, 0.986, 95% CI: 0.980–0.991; SSD, 0.968, 95% CI: 0.956–0.977) and TC (AUC: YOLOv5s, 0.989, 95% CI: 0.977–0.997; Faster R-CNN, 0.981, 95% CI: 0.965–0.991; SSD, 0.982, 95% CI: 0.973–0.990). Furthermore, in the frames where the model could correctly indicate the presence of CC or TC, it also accurately localized CC (intersection-over-union: YOLOv5s, 0.753, 95% CI: 0.739–0.765; Faster R-CNN, 0.720, 95% CI: 0.709–0.732; SSD, 0.739, 95% CI: 0.726–0.751) or TC (intersection-over-union: YOLOv5s, 0.739, 95% CI: 0.722–0.755; Faster R-CNN, 0.709, 95% CI: 0.687–0.730; SSD, 0.713, 95% CI: 0.695–0.730). The ML-based algorithms could identify anatomical landmarks for cricothyroidotomy in adult females with favorable discriminative and localization performance. Further studies are warranted to transfer this algorithm to hand-held portable ultrasound devices for clinical use.
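The localization metric reported above, intersection-over-union, is straightforward to compute for axis-aligned detection boxes. The box coordinates below are hypothetical.

```python
# Intersection-over-union for two axis-aligned bounding boxes.
def box_iou(a, b):
    """IoU of boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (48, 60, 112, 96)    # hypothetical YOLOv5s box for cricoid cartilage
truth = (50, 58, 110, 98)   # hypothetical ground-truth annotation
print(f"IoU = {box_iou(pred, truth):.3f}")   # ~0.85
```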

Citations: 0
A 3D Sparse Autoencoder for Fully Automated Quality Control of Affine Registrations in Big Data Brain MRI Studies
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00933-7
Venkata Sainath Gupta Thadikemalla, Niels K. Focke, Sudhakar Tummala

This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. Here, a customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images from the Human Connectome Project Young Adult (HCP-YA) dataset. We proposed that the quality of the registration is proportional to the reconstruction error of the autoencoder. Further, to make this method applicable to unseen datasets, we have proposed dataset-specific optimal threshold calculation (using the reconstruction error) from ROC analysis, which requires a subset of the correctly aligned and artificially generated misalignments specific to that dataset. The calculated optimal threshold is used for testing the quality of the remaining affine registrations from the corresponding datasets. The proposed framework was tested on four unseen datasets from autism brain imaging data exchange (ABIDE I, 215 subjects), information eXtraction from images (IXI, 577 subjects), Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the “Food and Brain” study (77 subjects). The framework achieved excellent performance for T1w and T2w affine registrations with an accuracy of 100% for HCP-YA. Further, we evaluated the generality of the model on the four unseen datasets and obtained accuracies of 81.81% for ABIDE I (only T1w), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the “Food and Brain” study (only T1w), and in the range 88–97% for IXI (for both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, the real failures from the “Food and Brain” and OASIS4 datasets were detected with sensitivities of 100% and 80% for T1w and T2w, respectively. In addition, AUCs > 0.88 were obtained in all scenarios during threshold calculation on the four test sets.
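A sketch of the dataset-specific threshold selection: score each volume by autoencoder reconstruction error, run ROC analysis against correctly aligned versus artificially misaligned labels, and pick an operating point. Youden's J is assumed here as the criterion (the abstract does not state one), and the error distributions are synthetic.

```python
# Dataset-specific QC threshold from reconstruction-error ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve, auc

# Reconstruction errors: low for correctly aligned volumes, high for
# artificially misaligned ones. Values below are synthetic.
rng = np.random.default_rng(7)
err_aligned = rng.gamma(2.0, 0.010, size=200)
err_misaligned = rng.gamma(2.0, 0.025, size=200)

errors = np.concatenate([err_aligned, err_misaligned])
labels = np.concatenate([np.zeros(200), np.ones(200)])   # 1 = misaligned

fpr, tpr, thresholds = roc_curve(labels, errors)
print("AUC:", auc(fpr, tpr))

# Operating point maximizing Youden's J = TPR - FPR (assumed criterion).
best = np.argmax(tpr - fpr)
threshold = thresholds[best]
print("optimal threshold:", threshold)

# QC rule for the remaining registrations of the same dataset:
flag_for_review = errors > threshold
```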

Citations: 0
A Deep Learning-Based Approach for Cervical Cancer Classification Using 3D CNN and Vision Transformer
IF 4.4 CAS Tier 2 (Engineering & Technology) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00911-z
Abinaya K., Sivakumar B.

Cervical cancer is a significant health problem worldwide, and early detection and treatment are critical to improving patient outcomes. To address this challenge, a deep learning (DL)-based cervical classification system is proposed using a 3D convolutional neural network and a Vision Transformer (ViT) module. The proposed model leverages the capability of the 3D CNN to extract spatiotemporal features from cervical images and employs the ViT module to capture and learn complex feature representations. The model consists of an input layer that receives cervical images, followed by a 3D convolution block, which extracts features from the images. The generated feature maps are down-sampled using a max-pooling block to eliminate redundant information and preserve important features. Four Vision Transformer models are employed to extract efficient feature maps at different levels of abstraction. The output of each Vision Transformer model is an efficient set of feature maps that captures spatiotemporal information at a specific level of abstraction. The feature maps generated by the Vision Transformer models are then fed into the 3D feature pyramid network (FPN) module for feature concatenation. A 3D squeeze-and-excitation (SE) block is employed to obtain efficient feature maps that recalibrate the feature responses of the network based on the interdependencies between different feature maps, thereby improving the discriminative power of the model. Finally, the dimensionality of the feature maps is reduced using a 3D average pooling layer. Its output is then fed into a kernel extreme learning machine (KELM) for classification into one of five classes. The KELM uses a radial basis function (RBF) kernel to map features into a high-dimensional feature space and classify the input samples. The superiority of the proposed model is demonstrated by simulation results, achieving an accuracy of 98.6% and showing its potential as an effective tool for cervical cancer classification. It can also be used as a diagnostic support tool to assist medical experts in accurately identifying cervical cancer in patients.
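The final KELM stage has a closed-form ridge solution over the training kernel matrix, beta = (I/C + K)^-1 T, with predictions K(x, X) beta. A minimal RBF-kernel version follows; the feature dimensions, C, and gamma are assumptions, and the pooled CNN/ViT features are replaced by random stand-ins.

```python
# Kernel extreme learning machine (KELM) with an RBF kernel, closed-form fit.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Closed-form ridge solution beta = (I/C + K)^-1 T over the train kernel."""
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y, n_classes):
        self.X = X
        T = np.eye(n_classes)[y]                          # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 64))      # stand-in for pooled 3D-CNN/ViT features
y = rng.integers(0, 5, size=100)    # five cervical classes
clf = KELM().fit(X, y, n_classes=5)
pred = clf.predict(X).argmax(1)
print("train accuracy:", (pred == y).mean())
```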

Citations: 0