From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging
Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee
{"title":"从 FDG 到 PSMA:PET/CT 成像中多示踪剂、多中心病灶分割的搭便车指南","authors":"Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee","doi":"arxiv-2409.09478","DOIUrl":null,"url":null,"abstract":"Automated lesion segmentation in PET/CT scans is crucial for improving\nclinical workflows and advancing cancer diagnostics. However, the task is\nchallenging due to physiological variability, different tracers used in PET\nimaging, and diverse imaging protocols across medical centers. To address this,\nthe autoPET series was created to challenge researchers to develop algorithms\nthat generalize across diverse PET/CT environments. This paper presents our\nsolution for the autoPET III challenge, targeting multitracer, multicenter\ngeneralization using the nnU-Net framework with the ResEncL architecture. Key\ntechniques include misalignment data augmentation and multi-modal pretraining\nacross CT, MR, and PET datasets to provide an initial anatomical understanding.\nWe incorporate organ supervision as a multitask approach, enabling the model to\ndistinguish between physiological uptake and tracer-specific patterns, which is\nparticularly beneficial in cases where no lesions are present. Compared to the\ndefault nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL\n(65.31) our model significantly improved performance with a Dice score of\n68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative\n(FNvol: 10.35) volumes. These results underscore the effectiveness of combining\nadvanced network design, augmentation, pretraining, and multitask learning for\nPET/CT lesion segmentation. Code is publicly available at\nhttps://github.com/MIC-DKFZ/autopet-3-submission.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging\",\"authors\":\"Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee\",\"doi\":\"arxiv-2409.09478\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated lesion segmentation in PET/CT scans is crucial for improving\\nclinical workflows and advancing cancer diagnostics. However, the task is\\nchallenging due to physiological variability, different tracers used in PET\\nimaging, and diverse imaging protocols across medical centers. To address this,\\nthe autoPET series was created to challenge researchers to develop algorithms\\nthat generalize across diverse PET/CT environments. This paper presents our\\nsolution for the autoPET III challenge, targeting multitracer, multicenter\\ngeneralization using the nnU-Net framework with the ResEncL architecture. Key\\ntechniques include misalignment data augmentation and multi-modal pretraining\\nacross CT, MR, and PET datasets to provide an initial anatomical understanding.\\nWe incorporate organ supervision as a multitask approach, enabling the model to\\ndistinguish between physiological uptake and tracer-specific patterns, which is\\nparticularly beneficial in cases where no lesions are present. 
Compared to the\\ndefault nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL\\n(65.31) our model significantly improved performance with a Dice score of\\n68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative\\n(FNvol: 10.35) volumes. These results underscore the effectiveness of combining\\nadvanced network design, augmentation, pretraining, and multitask learning for\\nPET/CT lesion segmentation. Code is publicly available at\\nhttps://github.com/MIC-DKFZ/autopet-3-submission.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09478\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09478","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging
Automated lesion segmentation in PET/CT scans is crucial for improving
clinical workflows and advancing cancer diagnostics. However, the task is
challenging due to physiological variability, different tracers used in PET
imaging, and diverse imaging protocols across medical centers. To address this,
the autoPET series was created to challenge researchers to develop algorithms
that generalize across diverse PET/CT environments. This paper presents our
solution for the autoPET III challenge, targeting multitracer, multicenter
generalization using the nnU-Net framework with the ResEncL architecture. Key
techniques include misalignment data augmentation and multi-modal pretraining
across CT, MR, and PET datasets to provide an initial anatomical understanding.
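The misalignment augmentation can be pictured as randomly translating the PET volume relative to the CT during training. The sketch below is a minimal illustration under assumed parameters (shift range, voxel spacing, linear interpolation); the authors' actual implementation lives in their nnU-Net-based training pipeline:

```python
import numpy as np
from scipy.ndimage import shift

def misalign_pet(ct, pet, max_shift_mm=5.0, spacing=(2.0, 2.0, 3.0), rng=None):
    """Randomly translate the PET volume relative to the CT to simulate
    inter-modality misalignment. Shift range and spacing are assumed
    values for illustration, not the paper's settings."""
    rng = rng or np.random.default_rng()
    # Draw a per-axis offset in millimetres and convert to voxel units.
    offset_mm = rng.uniform(-max_shift_mm, max_shift_mm, size=3)
    offset_vox = offset_mm / np.asarray(spacing, dtype=float)
    # Shift only the PET channel; the CT and segmentation labels stay fixed,
    # so the network must learn to tolerate imperfect registration.
    pet_shifted = shift(pet, offset_vox, order=1, mode="nearest")
    return ct, pet_shifted
```

Applied on the fly during training, a perturbation like this discourages the model from assuming perfect voxelwise alignment between the two modalities.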
We incorporate organ supervision as a multitask approach, enabling the model to
distinguish between physiological uptake and tracer-specific patterns, which is
particularly beneficial in cases where no lesions are present.
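Conceptually, organ supervision attaches an auxiliary organ-segmentation head next to the lesion head and sums the two losses. The PyTorch sketch below is an illustrative assumption of such a setup; the head design, class counts, and loss weighting are placeholders rather than the authors' configuration:

```python
import torch.nn as nn

class MultitaskHead(nn.Module):
    """Hypothetical multitask head: a shared decoder feature map feeds two
    1x1x1 conv heads, one for lesions and one for organ labels."""
    def __init__(self, in_channels: int, num_organs: int):
        super().__init__()
        self.lesion_head = nn.Conv3d(in_channels, 2, kernel_size=1)
        self.organ_head = nn.Conv3d(in_channels, num_organs + 1, kernel_size=1)

    def forward(self, feats):
        return self.lesion_head(feats), self.organ_head(feats)

def multitask_loss(lesion_logits, organ_logits, lesion_target, organ_target,
                   organ_weight: float = 0.5):
    # Lesion segmentation is the primary task; organ segmentation acts as
    # auxiliary supervision. The 0.5 weight is an assumption, not a tuned value.
    ce = nn.CrossEntropyLoss()
    return ce(lesion_logits, lesion_target) + organ_weight * ce(organ_logits, organ_target)
```

Because the organ head must account for normal uptake in organs, the lesion head is less inclined to flag physiological uptake as disease, which matches the abstract's claim about lesion-free cases.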
Compared to the default nnU-Net, which achieved a Dice score of 57.61, or the
larger ResEncL (65.31), our model significantly improved performance with a Dice
score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false
negative (FNvol: 10.35) volumes.
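For orientation, the reported quantities can be read roughly as follows. This is a simplified voxel-level sketch with a hypothetical voxel-volume argument; the official autoPET evaluation computes the false positive and false negative volumes over connected components rather than individual voxels:

```python
import numpy as np

def dice_fp_fn(pred: np.ndarray, gt: np.ndarray, voxel_vol_ml: float):
    """Voxel-level reading of Dice, FPvol, and FNvol for a binary
    prediction and ground truth mask (illustration only)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-8)
    # Volumes of predicted-but-absent and missed lesion voxels, in ml.
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_vol_ml
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_vol_ml
    return dice, fp_vol, fn_vol
```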
These results underscore the effectiveness of combining advanced network design,
augmentation, pretraining, and multitask learning for PET/CT lesion segmentation.
Code is publicly available at
https://github.com/MIC-DKFZ/autopet-3-submission.