From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging

Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee
{"title":"从 FDG 到 PSMA:PET/CT 成像中多示踪剂、多中心病灶分割的搭便车指南","authors":"Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee","doi":"arxiv-2409.09478","DOIUrl":null,"url":null,"abstract":"Automated lesion segmentation in PET/CT scans is crucial for improving\nclinical workflows and advancing cancer diagnostics. However, the task is\nchallenging due to physiological variability, different tracers used in PET\nimaging, and diverse imaging protocols across medical centers. To address this,\nthe autoPET series was created to challenge researchers to develop algorithms\nthat generalize across diverse PET/CT environments. This paper presents our\nsolution for the autoPET III challenge, targeting multitracer, multicenter\ngeneralization using the nnU-Net framework with the ResEncL architecture. Key\ntechniques include misalignment data augmentation and multi-modal pretraining\nacross CT, MR, and PET datasets to provide an initial anatomical understanding.\nWe incorporate organ supervision as a multitask approach, enabling the model to\ndistinguish between physiological uptake and tracer-specific patterns, which is\nparticularly beneficial in cases where no lesions are present. Compared to the\ndefault nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL\n(65.31) our model significantly improved performance with a Dice score of\n68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative\n(FNvol: 10.35) volumes. These results underscore the effectiveness of combining\nadvanced network design, augmentation, pretraining, and multitask learning for\nPET/CT lesion segmentation. Code is publicly available at\nhttps://github.com/MIC-DKFZ/autopet-3-submission.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging\",\"authors\":\"Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee\",\"doi\":\"arxiv-2409.09478\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated lesion segmentation in PET/CT scans is crucial for improving\\nclinical workflows and advancing cancer diagnostics. However, the task is\\nchallenging due to physiological variability, different tracers used in PET\\nimaging, and diverse imaging protocols across medical centers. To address this,\\nthe autoPET series was created to challenge researchers to develop algorithms\\nthat generalize across diverse PET/CT environments. This paper presents our\\nsolution for the autoPET III challenge, targeting multitracer, multicenter\\ngeneralization using the nnU-Net framework with the ResEncL architecture. Key\\ntechniques include misalignment data augmentation and multi-modal pretraining\\nacross CT, MR, and PET datasets to provide an initial anatomical understanding.\\nWe incorporate organ supervision as a multitask approach, enabling the model to\\ndistinguish between physiological uptake and tracer-specific patterns, which is\\nparticularly beneficial in cases where no lesions are present. 
Compared to the\\ndefault nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL\\n(65.31) our model significantly improved performance with a Dice score of\\n68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative\\n(FNvol: 10.35) volumes. These results underscore the effectiveness of combining\\nadvanced network design, augmentation, pretraining, and multitask learning for\\nPET/CT lesion segmentation. Code is publicly available at\\nhttps://github.com/MIC-DKFZ/autopet-3-submission.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09478\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09478","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Automated lesion segmentation in PET/CT scans is crucial for improving clinical workflows and advancing cancer diagnostics. However, the task is challenging due to physiological variability, the different tracers used in PET imaging, and diverse imaging protocols across medical centers. To address this, the autoPET series was created to challenge researchers to develop algorithms that generalize across diverse PET/CT environments. This paper presents our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture. Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets to provide an initial anatomical understanding. We incorporate organ supervision as a multitask approach, enabling the model to distinguish between physiological uptake and tracer-specific patterns, which is particularly beneficial in cases where no lesions are present. Compared to the default nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL (65.31), our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes. These results underscore the effectiveness of combining advanced network design, augmentation, pretraining, and multitask learning for PET/CT lesion segmentation. Code is publicly available at https://github.com/MIC-DKFZ/autopet-3-submission.
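To make the misalignment data augmentation concrete, here is a minimal sketch of the underlying idea: during training, the PET channel is rigidly shifted by a few millimetres relative to CT to simulate imperfect PET/CT co-registration. The function name, shift range, and interpolation settings are illustrative assumptions, not the challenge submission's actual implementation (see the linked repository for that).

```python
import numpy as np
from scipy.ndimage import shift

def misalign_pet(ct, pet, spacing_mm=(2.0, 2.0, 3.0), max_shift_mm=5.0, rng=None):
    """Simulate PET/CT misregistration: apply a small random rigid
    translation to the PET volume only, leaving the CT volume and the
    labels untouched. All parameter values are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    # Draw an independent millimetre shift per axis, convert to voxels.
    shift_vox = rng.uniform(-max_shift_mm, max_shift_mm, size=3) / np.asarray(spacing_mm)
    # Linear interpolation (order=1); border voxels repeat the edge value.
    pet_shifted = shift(pet, shift_vox, order=1, mode="nearest")
    return ct, pet_shifted

# Usage: ct and pet are co-registered 3D numpy arrays of equal shape.
# ct_aug, pet_aug = misalign_pet(ct, pet)
```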
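The organ supervision described in the abstract can be read as a standard multitask setup: a shared backbone feeds one head for lesions and one auxiliary head for organ labels, trained with a weighted sum of two losses. The PyTorch sketch below illustrates that wiring with a deliberately tiny stand-in backbone; the actual submission builds on nnU-Net's ResEncL U-Net, and the loss weighting and organ count here are assumptions.

```python
import torch.nn as nn

class MultitaskSegNet(nn.Module):
    """Shared backbone with two heads: lesion segmentation plus an
    auxiliary organ-segmentation task. The small conv stack below is a
    stand-in for a full 3D U-Net, not the ResEncL architecture itself."""
    def __init__(self, in_channels=2, features=32, num_organs=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_channels, features, 3, padding=1),
            nn.InstanceNorm3d(features),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(features, features, 3, padding=1),
            nn.InstanceNorm3d(features),
            nn.LeakyReLU(inplace=True),
        )
        self.lesion_head = nn.Conv3d(features, 2, kernel_size=1)              # background / lesion
        self.organ_head = nn.Conv3d(features, num_organs + 1, kernel_size=1)  # background + organs

    def forward(self, x):                     # x: (N, 2, D, H, W), CT + PET channels
        feats = self.backbone(x)
        return self.lesion_head(feats), self.organ_head(feats)

def multitask_loss(lesion_logits, organ_logits, lesion_gt, organ_gt, organ_weight=1.0):
    """Weighted sum of the two cross-entropy terms; the weight of 1.0 is
    an assumption, not a value taken from the paper."""
    ce = nn.CrossEntropyLoss()
    return ce(lesion_logits, lesion_gt) + organ_weight * ce(organ_logits, organ_gt)
```

Because the organ head is supervised on every scan, the shared features must encode where physiological uptake is expected, which is what lets the lesion head stay quiet on lesion-free cases.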
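Finally, the reported numbers (Dice 68.40, FPvol 7.82, FNvol 10.35) are easier to interpret with their definitions in hand. The sketch below computes a Dice score and false positive/negative volumes from binary masks, assuming the volumes are reported in millilitres as in earlier autoPET editions; the official autoPET evaluation additionally restricts FPvol/FNvol to whole connected components that miss the ground truth, which this simplified version omits.

```python
import numpy as np

def lesion_metrics(pred, gt, spacing_mm=(2.0, 2.0, 3.0)):
    """Dice score plus false-positive / false-negative volumes for binary
    masks. Volumes are converted to millilitres via the voxel spacing
    (the default spacing is an assumption; use the scan's true spacing)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0        # mm^3 per voxel -> ml
    denom = pred.sum() + gt.sum()
    # Convention: a correctly empty prediction on a lesion-free scan scores 1.0.
    dice = 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_ml   # predicted but not true
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_ml   # missed ground-truth voxels
    return dice, fp_vol, fn_vol
```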