Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT

Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold
{"title":"Autopet III 挑战赛:将解剖学知识纳入 nnUNet,在 PET/CT 中进行病灶分割","authors":"Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold","doi":"arxiv-2409.12155","DOIUrl":null,"url":null,"abstract":"Lesion segmentation in PET/CT imaging is essential for precise tumor\ncharacterization, which supports personalized treatment planning and enhances\ndiagnostic precision in oncology. However, accurate manual segmentation of\nlesions is time-consuming and prone to inter-observer variability. Given the\nrising demand and clinical use of PET/CT, automated segmentation methods,\nparticularly deep-learning-based approaches, have become increasingly more\nrelevant. The autoPET III Challenge focuses on advancing automated segmentation\nof tumor lesions in PET/CT images in a multitracer multicenter setting,\naddressing the clinical need for quantitative, robust, and generalizable\nsolutions. Building on previous challenges, the third iteration of the autoPET\nchallenge introduces a more diverse dataset featuring two different tracers\n(FDG and PSMA) from two clinical centers. To this extent, we developed a\nclassifier that identifies the tracer of the given PET/CT based on the Maximum\nIntensity Projection of the PET scan. We trained two individual\nnnUNet-ensembles for each tracer where anatomical labels are included as a\nmulti-label task to enhance the model's performance. Our final submission\nachieves cross-validation Dice scores of 76.90% and 61.33% for the publicly\navailable FDG and PSMA datasets, respectively. 
The code is available at\nhttps://github.com/hakal104/autoPETIII/ .","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT\",\"authors\":\"Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold\",\"doi\":\"arxiv-2409.12155\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lesion segmentation in PET/CT imaging is essential for precise tumor\\ncharacterization, which supports personalized treatment planning and enhances\\ndiagnostic precision in oncology. However, accurate manual segmentation of\\nlesions is time-consuming and prone to inter-observer variability. Given the\\nrising demand and clinical use of PET/CT, automated segmentation methods,\\nparticularly deep-learning-based approaches, have become increasingly more\\nrelevant. The autoPET III Challenge focuses on advancing automated segmentation\\nof tumor lesions in PET/CT images in a multitracer multicenter setting,\\naddressing the clinical need for quantitative, robust, and generalizable\\nsolutions. Building on previous challenges, the third iteration of the autoPET\\nchallenge introduces a more diverse dataset featuring two different tracers\\n(FDG and PSMA) from two clinical centers. To this extent, we developed a\\nclassifier that identifies the tracer of the given PET/CT based on the Maximum\\nIntensity Projection of the PET scan. We trained two individual\\nnnUNet-ensembles for each tracer where anatomical labels are included as a\\nmulti-label task to enhance the model's performance. Our final submission\\nachieves cross-validation Dice scores of 76.90% and 61.33% for the publicly\\navailable FDG and PSMA datasets, respectively. 
The code is available at\\nhttps://github.com/hakal104/autoPETIII/ .\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12155\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12155","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Lesion segmentation in PET/CT imaging is essential for precise tumor characterization, which supports personalized treatment planning and enhances diagnostic precision in oncology. However, accurate manual segmentation of lesions is time-consuming and prone to inter-observer variability. Given the rising demand and clinical use of PET/CT, automated segmentation methods, particularly deep-learning-based approaches, have become increasingly more relevant. The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images in a multitracer multicenter setting, addressing the clinical need for quantitative, robust, and generalizable solutions. Building on previous challenges, the third iteration of the autoPET challenge introduces a more diverse dataset featuring two different tracers (FDG and PSMA) from two clinical centers. To this extent, we developed a classifier that identifies the tracer of the given PET/CT based on the Maximum Intensity Projection of the PET scan. We trained two individual nnUNet-ensembles for each tracer where anatomical labels are included as a multi-label task to enhance the model's performance. Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets, respectively. The code is available at https://github.com/hakal104/autoPETIII/ .
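The tracer classifier described above operates on a 2D Maximum Intensity Projection (MIP) of the 3D PET volume. As a minimal sketch (not the authors' code; axis convention and array shapes are assumptions), a MIP can be computed with NumPy by taking the maximum voxel value along one spatial axis:

```python
import numpy as np

def maximum_intensity_projection(pet_volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a 3D PET volume (assumed z, y, x order) into a 2D projection
    by keeping the maximum voxel value along the chosen axis."""
    return pet_volume.max(axis=axis)

# Toy volume: one bright "lesion" voxel inside a zero background.
vol = np.zeros((4, 4, 4), dtype=np.float32)
vol[2, 1, 3] = 9.0
mip = maximum_intensity_projection(vol, axis=1)
print(mip.shape)  # (4, 4)
print(mip.max())  # 9.0
```

High-uptake structures survive the projection regardless of depth, which is why a 2D MIP is a compact input for distinguishing tracer-specific uptake patterns such as FDG versus PSMA.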
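The submission is evaluated with the Dice score reported above. For reference, a minimal sketch of the Dice coefficient on binary masks (assumed convention: empty prediction and empty ground truth count as a perfect match):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks:
    2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 0, 0])
print(dice_score(pred, gt))  # 0.666... (2 * 1 / (2 + 1))
```

Challenge leaderboards may apply additional per-case aggregation or special handling of empty masks, so this is only the core formula behind the reported percentages.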