{"title":"L2SR:通过强化学习为加速核磁共振成像学会采样和重建","authors":"Pu Yang and Bin Dong","doi":"10.1088/1361-6420/ad3b34","DOIUrl":null,"url":null,"abstract":"Magnetic resonance imaging (MRI) is a widely used medical imaging technique, but its long acquisition time can be a limiting factor in clinical settings. To address this issue, researchers have been exploring ways to reduce the acquisition time while maintaining the reconstruction quality. Previous works have focused on finding either sparse samplers with a fixed reconstructor or finding reconstructors with a fixed sampler. However, these approaches do not fully utilize the potential of joint learning of samplers and reconstructors. In this paper, we propose an alternating training framework for jointly learning a good pair of samplers and reconstructors via deep reinforcement learning. In particular, we consider the process of MRI sampling as a sampling trajectory controlled by a sampler, and introduce a novel sparse-reward partially observed Markov decision process (POMDP) to formulate the MRI sampling trajectory. Compared to the dense-reward POMDP used in existing works, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage. Moreover, the proposed framework, called learning to sample and reconstruct (L2SR), overcomes the training mismatch problem that arises in previous methods that use dense-reward POMDP. By alternately updating samplers and reconstructors, L2SR learns a pair of samplers and reconstructors that achieve state-of-the-art reconstruction performances on the fastMRI dataset. Codes are available at https://github.com/yangpuPKU/L2SR-Learning-to-Sample-and-Reconstruct.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"14 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"L2SR: learning to sample and reconstruct for accelerated MRI via reinforcement learning\",\"authors\":\"Pu Yang and Bin Dong\",\"doi\":\"10.1088/1361-6420/ad3b34\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Magnetic resonance imaging (MRI) is a widely used medical imaging technique, but its long acquisition time can be a limiting factor in clinical settings. To address this issue, researchers have been exploring ways to reduce the acquisition time while maintaining the reconstruction quality. Previous works have focused on finding either sparse samplers with a fixed reconstructor or finding reconstructors with a fixed sampler. However, these approaches do not fully utilize the potential of joint learning of samplers and reconstructors. In this paper, we propose an alternating training framework for jointly learning a good pair of samplers and reconstructors via deep reinforcement learning. In particular, we consider the process of MRI sampling as a sampling trajectory controlled by a sampler, and introduce a novel sparse-reward partially observed Markov decision process (POMDP) to formulate the MRI sampling trajectory. Compared to the dense-reward POMDP used in existing works, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage. Moreover, the proposed framework, called learning to sample and reconstruct (L2SR), overcomes the training mismatch problem that arises in previous methods that use dense-reward POMDP. 
By alternately updating samplers and reconstructors, L2SR learns a pair of samplers and reconstructors that achieve state-of-the-art reconstruction performances on the fastMRI dataset. Codes are available at https://github.com/yangpuPKU/L2SR-Learning-to-Sample-and-Reconstruct.\",\"PeriodicalId\":50275,\"journal\":{\"name\":\"Inverse Problems\",\"volume\":\"14 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inverse Problems\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1088/1361-6420/ad3b34\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inverse Problems","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1088/1361-6420/ad3b34","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
L2SR: learning to sample and reconstruct for accelerated MRI via reinforcement learning
Magnetic resonance imaging (MRI) is a widely used medical imaging technique, but its long acquisition time can be a limiting factor in clinical settings. To address this issue, researchers have been exploring ways to reduce the acquisition time while maintaining reconstruction quality. Previous works have focused on either finding sparse samplers for a fixed reconstructor or finding reconstructors for a fixed sampler. However, these approaches do not fully exploit the potential of jointly learning samplers and reconstructors. In this paper, we propose an alternating training framework for jointly learning a good pair of samplers and reconstructors via deep reinforcement learning. In particular, we view MRI sampling as a sampling trajectory controlled by a sampler, and introduce a novel sparse-reward partially observed Markov decision process (POMDP) to formulate this trajectory. Compared with the dense-reward POMDP used in existing works, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage. Moreover, the proposed framework, called learning to sample and reconstruct (L2SR), overcomes the training-mismatch problem that arises in previous methods based on dense-reward POMDPs. By alternately updating samplers and reconstructors, L2SR learns a pair of samplers and reconstructors that achieves state-of-the-art reconstruction performance on the fastMRI dataset. Code is available at https://github.com/yangpuPKU/L2SR-Learning-to-Sample-and-Reconstruct.
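To make the sampling trajectory and the alternating updates concrete, below is a minimal, self-contained PyTorch sketch of how they might be organized. It is not the authors' implementation (that is available at the repository linked above): the network architectures, image size, sampling budget, and the use of a plain REINFORCE gradient with a mean-reward baseline are all illustrative assumptions. What it illustrates is the sparse reward, computed only once per trajectory from the final reconstruction rather than after every sampling step.

# A minimal, self-contained sketch (not the authors' code) of the ideas in the abstract:
# a sampler picks k-space lines one at a time, the reward is sparse (computed once,
# from the final reconstruction only), and the sampler and reconstructor are updated
# alternately. All architectures, shapes, and hyperparameters are illustrative
# assumptions; plain REINFORCE stands in for the RL algorithm used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = N_LINES = 64   # assume a 64x64 image whose k-space rows are the candidate lines
BUDGET = 16          # sampling budget per trajectory (assumed)

class Sampler(nn.Module):
    """Policy network: scores unsampled k-space lines from the current mask and the
    zero-filled image of the partial observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_LINES + IMG * IMG, 256), nn.ReLU(),
                                 nn.Linear(256, N_LINES))

    def forward(self, mask, partial_img):
        logits = self.net(torch.cat([mask, partial_img.flatten(1)], dim=1))
        return logits.masked_fill(mask.bool(), float('-inf'))  # never resample a line

class Reconstructor(nn.Module):
    """Image-to-image network mapping the zero-filled image to a reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def zero_filled(kspace, mask):
    """Magnitude image from the inverse FFT of the masked k-space (lines = rows)."""
    return torch.fft.ifft2(kspace * mask[:, :, None]).abs().unsqueeze(1)

def rollout(sampler, reconstructor, kspace, target):
    """One sparse-reward trajectory: BUDGET stochastic line picks, then a single
    reconstruction and a single terminal reward (negative MSE)."""
    mask = torch.zeros(kspace.shape[0], N_LINES)
    log_probs = []
    for _ in range(BUDGET):
        img = zero_filled(kspace, mask)
        dist = torch.distributions.Categorical(logits=sampler(mask, img.squeeze(1)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        mask = mask.scatter(1, action[:, None], 1.0)
    recon = reconstructor(zero_filled(kspace, mask))
    reward = -F.mse_loss(recon, target, reduction='none').mean(dim=(1, 2, 3))
    return torch.stack(log_probs, dim=1), recon, reward

def train_step(sampler, reconstructor, opt_s, opt_r, kspace, target):
    # Sampler update: REINFORCE on the terminal reward with a mean baseline.
    log_probs, _, reward = rollout(sampler, reconstructor, kspace, target)
    policy_loss = -(log_probs.sum(dim=1) * (reward - reward.mean()).detach()).mean()
    opt_s.zero_grad(); policy_loss.backward(); opt_s.step()
    # Reconstructor update: supervised regression on trajectories drawn from the
    # current sampler (the discrete actions carry no gradient back to the sampler).
    _, recon, _ = rollout(sampler, reconstructor, kspace, target)
    recon_loss = F.mse_loss(recon, target)
    opt_r.zero_grad(); recon_loss.backward(); opt_r.step()

# Toy usage on random data, just to check that the shapes fit together.
sampler, reconstructor = Sampler(), Reconstructor()
opt_s = torch.optim.Adam(sampler.parameters(), lr=1e-4)
opt_r = torch.optim.Adam(reconstructor.parameters(), lr=1e-4)
kspace = torch.fft.fft2(torch.randn(4, IMG, IMG))    # fully sampled k-space (complex)
target = torch.fft.ifft2(kspace).abs().unsqueeze(1)  # ground-truth magnitude image
train_step(sampler, reconstructor, opt_s, opt_r, kspace, target)

In this sketch the reward is attached only to the terminal reconstruction, so a single reconstruction is computed per trajectory; with a dense, per-step reward a reconstruction would be needed after every sampled line, which is the source of the computational saving claimed in the abstract.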
Journal description:
An interdisciplinary journal combining mathematical and experimental papers on inverse problems with theoretical, numerical and practical approaches to their solution.
As well as applied mathematicians, physical scientists and engineers, the readership includes those working in geophysics, radar, optics, biology, acoustics, communication theory, signal processing and imaging, among others.
The emphasis is on publishing original contributions to methods of solving mathematical, physical and applied problems. To be publishable in this journal, papers must meet the highest standards of scientific quality, contain significant and original new science, and present a substantial advancement in the field. Due to the broad scope of the journal, we require that authors provide sufficient introductory material to appeal to the wide readership, and that articles which are not explicitly applied include a discussion of possible applications.