{"title":"探索多对比 MR 图像超分辨率的可分离注意力","authors":"Chun-Mei Feng;Yunlu Yan;Kai Yu;Yong Xu;Huazhu Fu;Jian Yang;Ling Shao","doi":"10.1109/TNNLS.2023.3253557","DOIUrl":null,"url":null,"abstract":"Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of the corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate different contrasts directly, ignoring their relationships in different clues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network (comprising high-intensity priority (HP) attention and low-intensity separation (LS) attention), named SANet. Our SANet could explore the areas of high- and low-intensity regions in the “forward” and “reverse” directions with the help of the auxiliary contrast while learning clearer anatomical structure and edge information for the SR of a target-contrast MR image. SANet provides three appealing benefits: First, it is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high- and low-intensity regions, diverting more attention to refining any uncertain details between these regions and correcting the fine areas in the reconstructed results. Second, a multistage integration module is proposed to learn the response of multi-contrast fusion at multiple stages, get the dependency between the fused representations, and boost their representation ability. Third, extensive experiments with various state-of-the-art multi-contrast SR methods on fastMRI and clinical in vivo datasets demonstrate the superiority of our model. The code is released at \n<monospace><uri>https://github.com/chunmeifeng/SANet</uri></monospace>\n.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":null,"pages":null},"PeriodicalIF":10.2000,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution\",\"authors\":\"Chun-Mei Feng;Yunlu Yan;Kai Yu;Yong Xu;Huazhu Fu;Jian Yang;Ling Shao\",\"doi\":\"10.1109/TNNLS.2023.3253557\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of the corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate different contrasts directly, ignoring their relationships in different clues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network (comprising high-intensity priority (HP) attention and low-intensity separation (LS) attention), named SANet. Our SANet could explore the areas of high- and low-intensity regions in the “forward” and “reverse” directions with the help of the auxiliary contrast while learning clearer anatomical structure and edge information for the SR of a target-contrast MR image. 
SANet provides three appealing benefits: First, it is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high- and low-intensity regions, diverting more attention to refining any uncertain details between these regions and correcting the fine areas in the reconstructed results. Second, a multistage integration module is proposed to learn the response of multi-contrast fusion at multiple stages, get the dependency between the fused representations, and boost their representation ability. Third, extensive experiments with various state-of-the-art multi-contrast SR methods on fastMRI and clinical in vivo datasets demonstrate the superiority of our model. The code is released at \\n<monospace><uri>https://github.com/chunmeifeng/SANet</uri></monospace>\\n.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2024-02-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10443261/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10443261/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of a corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate the different contrasts directly, ignoring the relationships between them in different cues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network, named SANet, comprising high-intensity priority (HP) attention and low-intensity separation (LS) attention. With the help of the auxiliary contrast, SANet explores the high- and low-intensity regions in the "forward" and "reverse" directions, respectively, while learning clearer anatomical structure and edge information for the SR of the target-contrast MR image. SANet offers three appealing benefits. First, it is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high- and low-intensity regions, devoting more attention to refining uncertain details between these regions and correcting fine structures in the reconstructed results. Second, a multistage integration module is proposed to learn the responses of multi-contrast fusion at multiple stages, capture the dependencies among the fused representations, and boost their representational power. Third, extensive experiments against various state-of-the-art multi-contrast SR methods on fastMRI and clinical in vivo datasets demonstrate the superiority of our model. The code is released at https://github.com/chunmeifeng/SANet.
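To make the high-/low-intensity separation described in the abstract concrete, below is a minimal PyTorch sketch of how an auxiliary-contrast feature map could produce a "forward" mask for high-intensity regions and its complement ("reverse" mask) for low-intensity regions, each gating a separate refinement branch of the target-contrast features. All module and variable names (SeparableAttentionSketch, mask_conv, hp_branch, ls_branch) are hypothetical illustrations of the idea, not the authors' released SANet implementation; for the actual architecture, see the repository linked above.

```python
# Hypothetical sketch of the separable-attention idea (HP + LS attention).
# Shapes, layer choices, and the fusion step are illustrative assumptions.
import torch
import torch.nn as nn


class SeparableAttentionSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel intensity mask from the auxiliary contrast.
        self.mask_conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        # Separate refinement branches for high- and low-intensity regions.
        self.hp_branch = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.ls_branch = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, target_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        # "Forward" mask highlights high-intensity regions of the auxiliary
        # contrast; its complement ("reverse" mask) covers low-intensity regions.
        mask = torch.sigmoid(self.mask_conv(aux_feat))
        hp = self.hp_branch(target_feat * mask)          # high-intensity priority (HP) attention
        ls = self.ls_branch(target_feat * (1.0 - mask))  # low-intensity separation (LS) attention
        # Recombine the two streams, keeping a residual path to the target features.
        return target_feat + self.fuse(torch.cat([hp, ls], dim=1))


if __name__ == "__main__":
    x_target = torch.randn(1, 64, 60, 60)  # target-contrast features
    x_aux = torch.randn(1, 64, 60, 60)     # auxiliary-contrast features
    out = SeparableAttentionSketch(64)(x_target, x_aux)
    print(out.shape)  # torch.Size([1, 64, 60, 60])
```

In the paper's design, such attention blocks would be applied at multiple stages and combined by the proposed multistage integration module; the residual fusion above is only a placeholder for that step.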
Journal introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.