Data-Centric Strategies for Overcoming PET/CT Heterogeneity: Insights from the AutoPET III Lesion Segmentation Challenge
Balint Kovacs, Shuhan Xiao, Maximilian Rokuss, Constantin Ulrich, Fabian Isensee, Klaus H. Maier-Hein
arXiv:2409.10120 · arXiv - EE - Image and Video Processing · 2024-09-16
The third autoPET challenge introduced a new data-centric task this year,
shifting the focus from model development to improving metastatic lesion
segmentation on PET/CT images through data quality and handling strategies. In
response, we developed targeted methods to enhance segmentation performance
tailored to the characteristics of PET/CT imaging. Our approach encompasses two
key elements. First, to address potential alignment errors between CT and PET
modalities as well as the prevalence of punctate lesions, we modified the
baseline data augmentation scheme and extended it with misalignment
augmentation. This adaptation aims to improve segmentation accuracy,
particularly for tiny metastatic lesions. Second, to tackle the variability in
image dimensions, which significantly affects prediction time, we implemented a
dynamic ensembling and test-time augmentation (TTA) strategy. This method
optimizes the use of ensembling and TTA within a 5-minute prediction time
limit, effectively leveraging the generalization potential for both small and
large images. Both of our solutions are designed to be robust across different
tracers and institutional settings, offering a general, yet imaging-specific
approach to the multi-tracer and multi-institutional challenges of the
competition. We made the challenge repository with our modifications publicly
available at https://github.com/MIC-DKFZ/miccai2024_autopet3_datacentric.
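
The misalignment augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' released implementation (see the linked repository for that): the idea is to randomly translate the PET volume relative to the CT during training so the network becomes robust to small inter-modality registration errors. The function name, the millimetre bound, and the use of `scipy.ndimage.shift` are all assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import shift


def misalignment_augmentation(ct, pet, max_shift_mm=3.0,
                              spacing=(1.0, 1.0, 1.0), rng=None):
    """Simulate CT/PET registration error by shifting only the PET channel.

    Hypothetical re-implementation: draws an independent sub-voxel
    translation per axis, bounded in millimetres, and applies it to PET
    with linear interpolation. CT (and any labels) stay fixed, so the
    network sees slightly misaligned modality pairs during training.
    """
    rng = rng or np.random.default_rng()
    # Convert the millimetre bound to voxels per axis via the spacing.
    shift_vox = [rng.uniform(-max_shift_mm, max_shift_mm) / s for s in spacing]
    # order=1 -> trilinear interpolation; mode="nearest" pads edges.
    pet_shifted = shift(pet, shift_vox, order=1, mode="nearest")
    return ct, pet_shifted
```

In practice such a transform would be registered alongside the other spatial augmentations of the training pipeline, with a small probability of being applied per sample.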
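
The dynamic ensembling/TTA strategy can likewise be sketched as a simple budgeting problem: given an estimated per-pass inference cost for a volume, pick the largest combination of ensemble members and test-time-augmentation passes that still fits the 5-minute limit, falling back to a single plain pass for very large images. The throughput constant, limits, and function name below are hypothetical; the actual scheduling logic lives in the linked repository.

```python
def plan_inference(num_voxels, budget_s=300.0, voxels_per_s=2.0e6,
                   max_models=5, max_tta=8):
    """Choose (ensemble members, TTA passes) fitting the time budget.

    Assumed model: total time scales linearly with the number of forward
    passes, i.e. models * tta * (num_voxels / voxels_per_s) seconds.
    `voxels_per_s` is a hardware-dependent constant you would calibrate.
    """
    cost = num_voxels / voxels_per_s  # seconds for one model, one pass
    best = (1, 1)  # fallback: always run at least one plain pass
    for models in range(1, max_models + 1):
        for tta in range(1, max_tta + 1):
            total = models * tta * cost
            # Prefer the configuration with the most total passes that fits.
            if total <= budget_s and models * tta > best[0] * best[1]:
                best = (models, tta)
    return best
```

For a small volume the full ensemble with all mirroring passes fits comfortably; for a very large one the planner degrades gracefully to a single model without TTA, which matches the behaviour the abstract describes for small versus large images.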