
Latest publications in: Head and neck tumor segmentation and outcome prediction: second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...

Head and Neck Tumor Segmentation and Outcome Prediction: Third Challenge, HECKTOR 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings
{"title":"Head and Neck Tumor Segmentation and Outcome Prediction: Third Challenge, HECKTOR 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings","authors":"","doi":"10.1007/978-3-031-27420-6","DOIUrl":"https://doi.org/10.1007/978-3-031-27420-6","url":null,"abstract":"","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83592290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Head and Neck Tumor Segmentation and Outcome Prediction: Second Challenge, HECKTOR 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
{"title":"Head and Neck Tumor Segmentation and Outcome Prediction: Second Challenge, HECKTOR 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings","authors":"","doi":"10.1007/978-3-030-98253-9","DOIUrl":"https://doi.org/10.1007/978-3-030-98253-9","url":null,"abstract":"","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"112 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77218136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Head and Neck Cancer Primary Tumor Auto Segmentation Using Model Ensembling of Deep Learning in PET/CT Images.
Mohamed A Naser, Kareem A Wahid, Lisanne V van Dijk, Renjie He, Moamen Abobakr Abdelaal, Cem Dede, Abdallah S R Mohamed, Clifton D Fuller

Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled to label fusion ensembling approaches are promising candidates for PET/CT oropharyngeal primary tumors auto-segmentation. Future investigations should target the ideal combination of channel combinations and label fusion strategies to maximize segmentation performance.
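As a rough illustration of the voxel-level majority-voting (AVERAGE) label fusion described in this abstract, the sketch below fuses binary masks from several cross-validation models. It assumes each model has already produced a binary mask of identical shape and uses a 0.5 vote threshold; it is a minimal sketch, not the authors' released code.

```python
import numpy as np

def majority_vote_fusion(masks, threshold=0.5):
    """Voxel-level majority voting over binary segmentation masks.

    masks     : list of binary arrays (e.g. 3D volumes) of identical shape,
                one per trained cross-validation model.
    threshold : fraction of models that must label a voxel as tumor
                (0.5 = simple majority; an assumed default).
    """
    stacked = np.stack([m.astype(np.float32) for m in masks], axis=0)
    vote_fraction = stacked.mean(axis=0)            # per-voxel fraction of "tumor" votes
    return (vote_fraction >= threshold).astype(np.uint8)

# Hypothetical usage with ten fold-specific predictions of the same shape:
# consensus = majority_vote_fusion(fold_masks)
```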

{"title":"Head and Neck Cancer Primary Tumor Auto Segmentation Using Model Ensembling of Deep Learning in PET/CT Images.","authors":"Mohamed A Naser, Kareem A Wahid, Lisanne V van Dijk, Renjie He, Moamen Abobakr Abdelaal, Cem Dede, Abdallah S R Mohamed, Clifton D Fuller","doi":"10.1007/978-3-030-98253-9_11","DOIUrl":"https://doi.org/10.1007/978-3-030-98253-9_11","url":null,"abstract":"<p><p>Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled to label fusion ensembling approaches are promising candidates for PET/CT oropharyngeal primary tumors auto-segmentation. Future investigations should target the ideal combination of channel combinations and label fusion strategies to maximize segmentation performance.</p>","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"13209 ","pages":"121-132"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8991449/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142086464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma.
Kareem A Wahid, Renjie He, Cem Dede, Abdallah S R Mohamed, Moamen Abobakr Abdelaal, Lisanne V van Dijk, Clifton D Fuller, Mohamed A Naser

PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 ± 0.060 and 0.650 ± 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation), leading to a 1st place ranking on the competition leaderboard. Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.
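To make the "separate channels" idea concrete, the snippet below assembles PET, CT, the primary tumor mask, and a clinical covariate into one channel-first array. Broadcasting the clinical value to a constant channel is only one plausible encoding and is our assumption; the abstract does not specify how the non-imaging data are injected.

```python
import numpy as np

def build_multichannel_input(pet, ct, tumor_mask, clinical_value):
    """Stack PET, CT, tumor mask, and a scalar clinical covariate as channels.

    pet, ct, tumor_mask : 3D arrays with identical shape (D, H, W).
    clinical_value      : normalized scalar broadcast to a constant channel
                          (hypothetical encoding for illustration only).
    Returns an array of shape (4, D, H, W) suitable as network input.
    """
    clinical_channel = np.full(pet.shape, clinical_value, dtype=np.float32)
    return np.stack(
        [pet.astype(np.float32), ct.astype(np.float32),
         tumor_mask.astype(np.float32), clinical_channel],
        axis=0,
    )
```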

{"title":"Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma.","authors":"Kareem A Wahid, Renjie He, Cem Dede, Abdallah S R Mohamed, Moamen Abobakr Abdelaal, Lisanne V van Dijk, Clifton D Fuller, Mohamed A Naser","doi":"10.1007/978-3-030-98253-9_28","DOIUrl":"https://doi.org/10.1007/978-3-030-98253-9_28","url":null,"abstract":"<p><p>PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 ± 0.060 and 0.650 ± 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation), leading to a 1<sup>st</sup> place ranking on the competition leaderboard. Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.</p>","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"13209 ","pages":"300-307"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8991448/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142086463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Head and Neck Cancer Primary Tumor Auto Segmentation using Model Ensembling of Deep Learning in PET-CT Images
M. Naser, K. Wahid, L. V. Dijk, R. He, M. A. Abdelaal, C. Dede, A. Mohamed, C. Fuller
Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that are able to demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled to label fusion ensembling approaches are promising candidates for PET/CT oropharyngeal primary tumors auto-segmentation, with future investigations targeting the ideal combination of channel combinations and label fusion strategies to maximize segmentation performance.
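The Dice similarity coefficient reported above can be computed with the generic definition below; the small epsilon guarding the empty-mask case is our own choice, not necessarily the challenge's exact evaluation script.

```python
import numpy as np

def dice_similarity_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```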
{"title":"Head and Neck Cancer Primary Tumor Auto Segmentation using Model Ensembling of Deep Learning in PET-CT Images","authors":"M. Naser, K. Wahid, L. V. Dijk, R. He, M. A. Abdelaal, C. Dede, A. Mohamed, C. Fuller","doi":"10.1101/2021.10.14.21264953","DOIUrl":"https://doi.org/10.1101/2021.10.14.21264953","url":null,"abstract":"Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that are able to demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled to label fusion ensembling approaches are promising candidates for PET/CT oropharyngeal primary tumors auto-segmentation, with future investigations targeting the ideal combination of channel combinations and label fusion strategies to maximize seg-mentation performance.","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"117 1","pages":"121-132"},"PeriodicalIF":0.0,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77144792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma
K. Wahid, R. He, C. Dede, A. Mohamed, M. A. Abdelaal, L. V. Dijk, C. Fuller, M. Naser
PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 ± 0.060 and 0.650 ± 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation). Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.
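For readers unfamiliar with the evaluation metric, the following is a minimal Harrell-style concordance index. The handling of ties and censoring follows the usual textbook definition and may differ in detail from the challenge's scoring code.

```python
import numpy as np

def concordance_index(times, risks, observed):
    """Harrell's C-index: fraction of comparable patient pairs whose predicted
    risk ordering agrees with their observed survival ordering (O(n^2) sketch).

    times    : follow-up times (e.g. days to progression or censoring).
    risks    : model outputs, higher = predicted to progress sooner.
    observed : boolean array, True where progression was actually observed.
    """
    times, risks, observed = map(np.asarray, (times, risks, observed))
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if not observed[i]:
            continue                       # censored patients cannot anchor a pair
        for j in range(len(times)):
            if times[i] < times[j]:        # i progressed before j's follow-up ended
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```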
{"title":"Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma","authors":"K. Wahid, R. He, C. Dede, A. Mohamed, M. A. Abdelaal, L. V. Dijk, C. Fuller, M. Naser","doi":"10.1101/2021.10.14.21264958","DOIUrl":"https://doi.org/10.1101/2021.10.14.21264958","url":null,"abstract":"PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 +- 0.060 and 0.650 +- 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation). Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"84 1","pages":"300-307"},"PeriodicalIF":0.0,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80460043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Progression Free Survival Prediction for Head and Neck Cancer using Deep Learning based on Clinical and PET-CT Imaging Data
M. Naser, K. Wahid, A. Mohamed, M. A. Abdelaal, R. He, C. Dede, L. V. Dijk, C. Fuller
Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participate in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We develop a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N=224) yielded C-index values up to 0.622 (without) and 0.842 (with) censoring status considered in C-index computation, respectively. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N=101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.
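The abstract's "negative log-likelihood loss" is not spelled out; one common choice for deep survival networks is the negative log Cox partial likelihood, sketched below in NumPy. Ties in event times are not handled specially, and this is offered only as a plausible formulation, not the authors' code.

```python
import numpy as np

def cox_partial_nll(risk_scores, times, observed):
    """Negative log partial likelihood of a Cox model over one batch.

    risk_scores : predicted log-hazards, shape (n,); higher = higher risk.
    times       : follow-up times in days, shape (n,).
    observed    : boolean array, True where progression occurred.
    """
    order = np.argsort(-np.asarray(times))             # sort by descending time
    risk = np.asarray(risk_scores, dtype=float)[order]
    obs = np.asarray(observed, dtype=bool)[order]
    # Cumulative log-sum-exp gives the log risk-set denominator for each patient.
    log_risk_set = np.logaddexp.accumulate(risk)
    events = obs.sum()
    if events == 0:
        return 0.0                                      # no observed events in this batch
    return float(-(risk[obs] - log_risk_set[obs]).sum() / events)
```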
{"title":"Progression Free Survival Prediction for Head and Neck Cancer using Deep Learning based on Clinical and PET-CT Imaging Data","authors":"M. Naser, K. Wahid, A. Mohamed, M. A. Abdelaal, R. He, C. Dede, L. V. Dijk, C. Fuller","doi":"10.1101/2021.10.14.21264955","DOIUrl":"https://doi.org/10.1101/2021.10.14.21264955","url":null,"abstract":"Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participate in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We develop a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N=224) yielded C-index values up to 0.622 (without) and 0.842 (with) censoring status considered in C-index computation, respectively. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N=101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"1 1","pages":"287-299"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77815003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Head and Neck Tumor Segmentation: First Challenge, HECKTOR 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings
{"title":"Head and Neck Tumor Segmentation: First Challenge, HECKTOR 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings","authors":"","doi":"10.1007/978-3-030-67194-5","DOIUrl":"https://doi.org/10.1007/978-3-030-67194-5","url":null,"abstract":"","PeriodicalId":93561,"journal":{"name":"Head and neck tumor segmentation and outcome prediction : second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Head and Neck Tumor Segmentation Challenge (2nd : 2021 ...","volume":"96 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76021487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8