AutoPET Challenge III: Testing the Robustness of Generalized Dice Focal Loss trained 3D Residual UNet for FDG and PSMA Lesion Segmentation from Whole-Body PET/CT Images
{"title":"AutoPET Challenge III: Testing the Robustness of Generalized Dice Focal Loss trained 3D Residual UNet for FDG and PSMA Lesion Segmentation from Whole-Body PET/CT Images","authors":"Shadab Ahamed","doi":"arxiv-2409.10151","DOIUrl":null,"url":null,"abstract":"Automated segmentation of cancerous lesions in PET/CT scans is a crucial\nfirst step in quantitative image analysis. However, training deep learning\nmodels for segmentation with high accuracy is particularly challenging due to\nthe variations in lesion size, shape, and radiotracer uptake. These lesions can\nappear in different parts of the body, often near healthy organs that also\nexhibit considerable uptake, making the task even more complex. As a result,\ncreating an effective segmentation model for routine PET/CT image analysis is\nchallenging. In this study, we utilized a 3D Residual UNet model and employed\nthe Generalized Dice Focal Loss function to train the model on the AutoPET\nChallenge 2024 dataset. We conducted a 5-fold cross-validation and used an\naverage ensembling technique using the models from the five folds. In the\npreliminary test phase for Task-1, the average ensemble achieved a mean Dice\nSimilarity Coefficient (DSC) of 0.6687, mean false negative volume (FNV) of\n10.9522 ml and mean false positive volume (FPV) 2.9684 ml. More details about\nthe algorithm can be found on our GitHub repository:\nhttps://github.com/ahxmeds/autosegnet2024.git. The training code has been\nshared via the repository: https://github.com/ahxmeds/autopet2024.git.","PeriodicalId":501378,"journal":{"name":"arXiv - PHYS - Medical Physics","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Medical Physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Automated segmentation of cancerous lesions in PET/CT scans is a crucial first step in quantitative image analysis. However, training deep learning models for segmentation with high accuracy is particularly challenging due to variations in lesion size, shape, and radiotracer uptake. These lesions can appear in different parts of the body, often near healthy organs that also exhibit considerable uptake, making the task even more complex. As a result, creating an effective segmentation model for routine PET/CT image analysis is challenging. In this study, we utilized a 3D Residual UNet model and employed the Generalized Dice Focal Loss function to train the model on the AutoPET Challenge 2024 dataset. We conducted a 5-fold cross-validation and averaged the predictions of the models from the five folds as an ensemble. In the preliminary test phase for Task-1, the average ensemble achieved a mean Dice Similarity Coefficient (DSC) of 0.6687, a mean false negative volume (FNV) of 10.9522 ml, and a mean false positive volume (FPV) of 2.9684 ml. More details about the algorithm can be found in our GitHub repository: https://github.com/ahxmeds/autosegnet2024.git. The training code has been shared via the repository: https://github.com/ahxmeds/autopet2024.git.
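The linked repositories contain the full pipeline. As a minimal sketch of the ingredients named in the abstract, the snippet below builds a 3D residual UNet and a Generalized Dice Focal Loss with MONAI, then averages softmax probabilities across fold models for the ensemble. The channel widths, strides, patch size, and the two-channel PET/CT input are illustrative assumptions, not the authors' exact configuration.

    import torch
    from monai.networks.nets import UNet
    from monai.losses import GeneralizedDiceFocalLoss

    # 3D UNet with residual units; in_channels=2 assumes PET and CT stacked as channels.
    model = UNet(
        spatial_dims=3,
        in_channels=2,
        out_channels=2,                     # background vs. lesion
        channels=(32, 64, 128, 256, 512),   # illustrative encoder widths
        strides=(2, 2, 2, 2),
        num_res_units=2,                    # residual blocks per level
    )

    # Generalized Dice Focal Loss: softmax over the two output channels,
    # with the ground-truth mask converted to one-hot.
    loss_fn = GeneralizedDiceFocalLoss(softmax=True, to_onehot_y=True)

    x = torch.randn(1, 2, 96, 96, 96)               # dummy (batch, channel, D, H, W) patch
    y = torch.randint(0, 2, (1, 1, 96, 96, 96))     # dummy binary lesion mask
    loss = loss_fn(model(x), y)

    # Average ensembling at inference: mean of the per-fold softmax probabilities.
    fold_models = [model] * 5                       # stand-in for the five trained fold models
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(m(x), dim=1) for m in fold_models]
        ).mean(dim=0)
        prediction = probs.argmax(dim=1)            # final segmentation labels

A real pipeline would add PET/CT-specific preprocessing (resampling, intensity normalization) and sliding-window inference over whole-body volumes; see the repositories above for the full implementation.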