{"title":"Iteration-Dependent Networks and Losses for Unrolled Deep Learned FBSEM PET Image Reconstruction","authors":"Guillaume Corda-D’Incan, J. Schnabel, A. Reader","doi":"10.1109/NSS/MIC42677.2020.9507780","DOIUrl":null,"url":null,"abstract":"We present here an enhanced version of FBSEM-Net, a deep learned regularised model-based image reconstruction algorithm. FBSEM-Net unrolls the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step by a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learnt by the network from training data. Nonetheless, some issues arise from its original implementation that we improve upon in this work to obtain a more practical implementation. Specifically, in this implementation, two theoretical improvements are included: i) iteration-dependent networks are used which allows adaptation to varying noise levels as the number of iterations evolves, ii) iteration-dependent targets are used, so that the deep learnt regulariser remains a pure denoising step without any artificial acceleration of the algorithm. Furthermore, we present a new sequential training method for fully unrolled deep networks where the iterative reconstruction is split and the network is trained on each of its modules separately to match the total number of iterations used to reconstruct the targets. The results obtained on 2D simulated test data show that FBSEM-Net using iteration-dependent networks outperforms the original version. Additionally, we found that using iteration-dependent targets not only helps to reduce the variance for different training runs of the network, thus offering greater stability, but also gives the possibility of using a lower number of iterations for test time than what was used for training. Ultimately, we demonstrate that sequential training successfully addresses potential memory issues raised during the training of unrolled networks, without notably impacting the network's performance compared to conventional training.","PeriodicalId":6760,"journal":{"name":"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)","volume":"2 1","pages":"1-4"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NSS/MIC42677.2020.9507780","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
We present an enhanced version of FBSEM-Net, a deep learned regularised model-based image reconstruction algorithm. FBSEM-Net unrolls the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step with a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learnt by the network from training data. Nonetheless, the original implementation has some shortcomings, which we address in this work to obtain a more practical method. Specifically, two theoretical improvements are included: i) iteration-dependent networks, which allow adaptation to the varying noise levels encountered as the iterations progress; ii) iteration-dependent targets, so that the deep learnt regulariser remains a pure denoising step without any artificial acceleration of the algorithm. Furthermore, we present a new sequential training method for fully unrolled deep networks, in which the iterative reconstruction is split into modules and the network is trained on each module separately, matching the total number of iterations used to reconstruct the targets. The results obtained on 2D simulated test data show that FBSEM-Net using iteration-dependent networks outperforms the original version. Additionally, we found that using iteration-dependent targets not only helps to reduce the variance across different training runs of the network, thus offering greater stability, but also allows fewer iterations to be used at test time than were used for training. Ultimately, we demonstrate that sequential training successfully addresses the memory issues that arise during the training of unrolled networks, without notably impacting the network's performance compared to conventional training.
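To make the structure described above concrete, here is a minimal PyTorch sketch of an unrolled MAP-EM reconstruction with iteration-dependent networks, i.e. one residual CNN and one learnable regularisation strength per unrolled iteration. The system model callables, the small CNN architecture, and the De Pierro-style fusion rule are illustrative assumptions based on the general FBSEM-Net structure the abstract describes, not the authors' exact implementation.

```python
# Hedged sketch: unrolled MAP-EM with iteration-dependent learned regularisers.
# forward_op / back_op stand in for the PET system model A and its adjoint A^T.
import torch
import torch.nn as nn

class ResidualRegulariser(nn.Module):
    """Small residual CNN standing in for the deep learnt regulariser."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection: the net learns a correction

class UnrolledFBSEM(nn.Module):
    def __init__(self, n_iters, forward_op, back_op):
        super().__init__()
        self.n_iters = n_iters
        self.A, self.At = forward_op, back_op      # system model and its adjoint
        # Iteration-dependent networks: one regulariser per unrolled iteration,
        # plus a learnable regularisation strength beta for each iteration.
        self.regs = nn.ModuleList(ResidualRegulariser() for _ in range(n_iters))
        self.betas = nn.Parameter(torch.full((n_iters,), 0.01))

    def step(self, sinogram, x, k, eps=1e-9):
        """One unrolled block: EM update, learned regularisation, fusion."""
        sens = self.At(torch.ones_like(sinogram))  # sensitivity image A^T 1
        x_em = x / (sens + eps) * self.At(sinogram / (self.A(x) + eps))  # EM update
        x_reg = self.regs[k](x)                    # iteration-dependent network
        beta = self.betas[k].clamp(min=1e-6)
        # Assumed De Pierro-like fusion: positive root of the quadratic
        # beta*x^2 + (sens - beta*x_reg)*x - sens*x_em = 0, which arises from a
        # quadratic penalty anchored at the network output.
        b = sens - beta * x_reg
        return (-b + torch.sqrt(b * b + 4.0 * beta * sens * x_em)) / (2.0 * beta)

    def forward(self, sinogram, x0):
        x = x0
        for k in range(self.n_iters):
            x = self.step(sinogram, x, k)
        return x
```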
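The sequential training idea can likewise be sketched on top of the class above: each unrolled module is trained on its own against an iteration-dependent target (e.g. a reference reconstruction after the matching number of iterations), with its input detached from the previous module, so only one module's computation graph is ever held in memory. The dataset layout, the MSE loss, the optimiser choice, and the `targets_per_iter` structure are all illustrative assumptions, not the authors' code.

```python
# Hedged sketch of sequential (module-wise) training for an unrolled network.
import torch

def train_sequentially(model, dataset, targets_per_iter, n_epochs=10, lr=1e-4):
    """model: UnrolledFBSEM; dataset: list of (sinogram, x0) pairs;
    targets_per_iter[k][i]: target for sample i after k+1 reference iterations
    (the iteration-dependent target)."""
    loss_fn = torch.nn.MSELoss()
    x_inputs = [x0 for (_, x0) in dataset]  # current (detached) input to module k
    for k in range(model.n_iters):
        # Optimise only module k's parameters; gradients reach only betas[k].
        params = list(model.regs[k].parameters()) + [model.betas]
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(n_epochs):
            for i, (sino, _) in enumerate(dataset):
                opt.zero_grad()
                x_next = model.step(sino, x_inputs[i], k)  # single unrolled block
                loss = loss_fn(x_next, targets_per_iter[k][i])
                loss.backward()  # graph spans one module only: low memory cost
                opt.step()
        # Freeze module k's outputs as the inputs to module k+1.
        with torch.no_grad():
            x_inputs = [model.step(s, x, k)
                        for (s, _), x in zip(dataset, x_inputs)]
```

Because each module is trained to reproduce the reference algorithm's state at the matching iteration number, the unrolled network stays a pure per-iteration denoiser, which is what makes it plausible to stop after fewer modules at test time than were used in training.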