Deep Convolutional Backbone Comparison for Automated PET Image Quality Assessment
Pub Date : 2024-08-01 DOI: 10.1109/TRPMS.2024.3436697
Jessica B. Hopson;Anthime Flaus;Colm J. McGinnity;Radhouene Neji;Andrew J. Reader;Alexander Hammers
Pretraining deep convolutional network mappings on natural images helps with medical image analysis tasks; this is important given the limited number of clinically annotated medical images. However, many 2-D pretrained backbone networks are currently available, and it is unclear which are best suited to a given task. This work compared 18 different backbones from 5 architecture groups (pretrained on ImageNet) for the task of assessing [18F]FDG brain positron emission tomography (PET) image quality (reconstructed at seven simulated doses), based on three clinical image quality metrics (global quality rating, pattern recognition, and diagnostic confidence). Using 2-D randomly sampled patches, up to eight patients (at three dose levels each) were used for training, with three separate patient datasets used for testing. Each backbone was trained five times with the same training and validation sets, and with six cross-folds. Training only the final fully connected layer (~6000–20000 trainable parameters) achieved a test mean absolute error (MAE) of ~0.5, which was within the intrinsic uncertainty of clinical scoring. To compare “classical” and over-parameterized regimes, the pretrained weights of the last 40% of the network layers were then unfrozen. The MAE fell below 0.5 for 14 of the 18 backbones assessed, including two that had previously failed to train. Generally, backbones with residual units (e.g., DenseNets and ResNetV2s) were best suited to this task, achieving the lowest MAE at test time (~0.45–0.5). This proof-of-concept study shows that over-parameterization may also be important for automated PET image quality assessment.
{"title":"Deep Convolutional Backbone Comparison for Automated PET Image Quality Assessment","authors":"Jessica B. Hopson;Anthime Flaus;Colm J. McGinnity;Radhouene Neji;Andrew J. Reader;Alexander Hammers","doi":"10.1109/TRPMS.2024.3436697","DOIUrl":"10.1109/TRPMS.2024.3436697","url":null,"abstract":"Pretraining deep convolutional network mappings using natural images helps with medical imaging analysis tasks; this is important given the limited number of clinically annotated medical images. Many 2-D pretrained backbone networks, however, are currently available. This work compared 18 different backbones from 5 architecture groups (pretrained on ImageNet) for the task of assessing [18F]FDG brain positron emission tomography (PET) image quality (reconstructed at seven simulated doses), based on three clinical image quality metrics (global quality rating, pattern recognition, and diagnostic confidence). Using 2-D randomly sampled patches, up to eight patients (at three dose levels each) were used for training, with three separate patient datasets used for testing. Each backbone was trained five times with the same training and validation sets, and with six cross-folds. Training only the final fully connected layer (with ~6000–20000 trainable parameters) achieved a test mean-absolute-error (MAE) of ~0.5 (which was within the intrinsic uncertainty of clinical scoring). To compare “classical” and over-parameterized regimes, the pretrained weights of the last 40% of the network layers were then unfrozen. The MAE fell below 0.5 for 14 out of the 18 backbones assessed, including two that previously failed to train. Generally, backbones with residual units (e.g., DenseNets and ResNetV2s), were suited to this task, in terms of achieving the lowest MAE at test time (~0.45–0.5). This proof-of-concept study shows that over-parameterization may also be important for automated PET image quality assessments.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 8","pages":"893-901"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-Stationary Multisource AI-Powered Real-Time Tomography
Pub Date : 2024-07-25 DOI: 10.1109/TRPMS.2024.3433575
Weiwen Wu;Yaohui Tang;Tianling Lv;Wenxiang Cong;Chuang Niu;Cheng Wang;Yiyan Guo;Peiqian Chen;Yunheng Chang;Ge Wang;Yan Xi
Over the past decades, the development of computed tomography (CT) technologies has been largely driven by the need for cardiac imaging, but temporal resolution remains insufficient for clinical CT in difficult cases and is even more challenging for preclinical CT, since small animals have much higher heart rates than humans. To address this challenge, we report a semi-stationary multisource artificial intelligence (AI)-based real-time tomography (SMART) CT system. This unique scanner features 29 source-detector pairs fixed on a circular track that collect X-ray signals in parallel, enabling instantaneous tomography in principle. Given the multisource architecture, the field of view covers only a cardiac region. To solve the resulting interior problem, an AI-empowered interior tomography approach is developed that synergizes sparsity-based regularization and learning-based reconstruction. To demonstrate the performance and utility of the SMART system, extensive results are obtained in physical phantom experiments and animal studies, including dead and live rats as well as live rabbits. The reconstructed volumetric images convincingly demonstrate the merits of the SMART system with the AI-empowered interior tomography approach, enabling cardiac CT with an unprecedented temporal resolution of 33 ms, the highest among state-of-the-art systems.
{"title":"Semi-Stationary Multisource AI-Powered Real-Time Tomography","authors":"Weiwen Wu;Yaohui Tang;Tianling Lv;Wenxiang Cong;Chuang Niu;Cheng Wang;Yiyan Guo;Peiqian Chen;Yunheng Chang;Ge Wang;Yan Xi","doi":"10.1109/TRPMS.2024.3433575","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3433575","url":null,"abstract":"Over the past decades, the development of computed tomography (CT) technologies has been largely driven by the need for cardiac imaging but the temporal resolution remains insufficient for clinical CT in difficult cases and rather challenging for preclinical CT since small animals have much higher heart rates than humans. To address this challenge, here we report a semi-stationary multisource artificial intelligence (AI)-based real-time tomography (SMART) CT system. This unique scanner is featured by 29 source-detector pairs fixed on a circular track to collect X-ray signals in parallel, enabling instantaneous tomography in principle. Given the multisource architecture, the field of view covers only a cardiac region. To solve the interior problem, an AI-empowered interior tomography approach is developed to synergize sparsity-based regularization and learning-based reconstruction. To demonstrate the performance and utilities of the SMART system, extensive results are obtained in physical phantom experiments and animal studies, including dead and live rats as well as live rabbits. The reconstructed volumetric images convincingly demonstrate the merits of the SMART system using the AI-empowered interior tomography approach, enabling cardiac CT with the unprecedented temporal resolution of 33 ms, which enjoys the highest temporal resolution than the state of the art.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"118-130"},"PeriodicalIF":4.6,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142912394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-23 DOI: 10.1109/TRPMS.2024.3432194
Celia Valladares;John Barrio;Neus Cucarella;Marta Freire;Luis F. Vidal;José M. Benlloch;Antonio J. González
Positron emission tomography (PET) stands out as a highly specific molecular imaging technique. However, its detection sensitivity remains a challenge. The implementation of time-of-flight (TOF) PET technology enhances sensitivity by precisely measuring the arrival-time difference between the two annihilation photons. Moreover, by characterizing scattered (Compton) events, the effective sensitivity of PET imaging might be significantly enhanced. In this work, we present the scatter subsystem of a two-layer preclinical TOF-PET scanner for mouse head imaging. The scatter subsystem is composed of eight identical modules based on analog silicon photomultipliers (SiPMs) coupled to crystal arrays of $24\times 24$
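The abstract above is truncated, so the following only illustrates the TOF principle it invokes: the measured arrival-time difference between the two annihilation photons places the annihilation point along the line of response at an offset of c·Δt/2 from its midpoint. The 200 ps coincidence timing resolution used below is an assumed, illustrative value.

# Worked illustration of the TOF-PET localization principle mentioned above.
C = 299_792_458.0  # speed of light, m/s

def tof_offset(dt_seconds):
    # Offset of the annihilation point from the line-of-response midpoint.
    return C * dt_seconds / 2.0

ctr = 200e-12  # assumed coincidence timing resolution: 200 ps
print(f"localization uncertainty: {tof_offset(ctr) * 100:.1f} cm")  # ~3.0 cm
# Finer timing resolution shrinks this segment, raising effective sensitivity.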