
Latest articles in IEEE Transactions on Medical Imaging

Multi-Organ Foundation Model for Universal Ultrasound Image Segmentation with Task Prompt and Anatomical Prior.
Pub Date : 2024-10-03 DOI: 10.1109/TMI.2024.3472672
Haobo Chen, Yehua Cai, Changyan Wang, Lin Chen, Bo Zhang, Hong Han, Yuqing Guo, Hong Ding, Qi Zhang

Semantic segmentation of ultrasound (US) images with deep learning has played a crucial role in computer-aided disease screening, diagnosis and prognosis. However, due to the scarcity of US images and their small field of view, the resulting segmentation models are tailored to a specific single organ and may lack robustness, overlooking correlations among anatomical structures of multiple organs. To address these challenges, we propose the Multi-Organ FOundation (MOFO) model for universal US image segmentation. The MOFO is optimized jointly from multiple organs across various anatomical regions to overcome the data scarcity and explore correlations between multiple organs. The MOFO extracts organ-invariant representations from US images. Simultaneously, the task prompt is employed to refine organ-specific representations for segmentation predictions. Moreover, the anatomical prior is incorporated to enhance the consistency of the anatomical structures. A multi-organ US database, comprising 7039 images from 10 organs across various regions of the human body, has been established to evaluate our model. Results demonstrate that the MOFO outperforms single-organ methods in terms of the Dice coefficient, 95% Hausdorff distance and average symmetric surface distance with statistically significant margins. Our experiments in multi-organ universal segmentation for US images serve as a pioneering exploration of improving segmentation performance by leveraging semantic and anatomical relationships within US images of multiple organs.
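The comparison above ranks methods by overlap and surface-distance metrics; the Dice coefficient in particular reduces to a short computation on binary masks. A minimal sketch, with masks represented as sets of pixel indices (illustrative only, not the authors' evaluation code):

```python
def dice_coefficient(pred, target):
    """Dice coefficient between two binary masks given as sets of pixel indices:
    2 * |intersection| / (|pred| + |target|)."""
    inter = len(pred & target)
    total = len(pred) + len(target)
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy example: two partially overlapping masks.
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
print(dice_coefficient(a, b))  # 2*2 / 8 = 0.5
```

The 95% Hausdorff distance and average symmetric surface distance follow the same pattern but operate on mask boundary points rather than full masks.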

SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI.
Pub Date : 2024-10-03 DOI: 10.1109/TMI.2024.3473009
Zhuo-Xu Cui, Chentao Cao, Yue Wang, Sen Jia, Jing Cheng, Xin Liu, Hairong Zheng, Dong Liang, Yanjie Zhu

Diffusion models have emerged as a leading methodology for image generation and have proven successful in the realm of magnetic resonance imaging (MRI) reconstruction. However, existing reconstruction methods based on diffusion models are primarily formulated in the image domain, making the reconstruction quality susceptible to inaccuracies in coil sensitivity maps (CSMs). k-space interpolation methods can effectively address this issue, but conventional diffusion models are not readily applicable to k-space interpolation. To overcome this challenge, we introduce a novel approach called SPIRiT-Diffusion, a diffusion model for k-space interpolation inspired by the iterative self-consistent SPIRiT method. Specifically, we utilize the iterative solver of the self-consistent term (i.e., the k-space physical prior) in SPIRiT to formulate a novel stochastic differential equation (SDE) governing the diffusion process. Subsequently, k-space data can be interpolated by executing the diffusion process. This innovative approach highlights the optimization model's role in designing the SDE in diffusion models, enabling the diffusion process to align closely with the physics inherent in the optimization model, a concept referred to as model-driven diffusion. We evaluated the proposed SPIRiT-Diffusion method using a 3D joint intracranial and carotid vessel wall imaging dataset. The results convincingly demonstrate its superiority over image-domain reconstruction methods, achieving high reconstruction quality even at a substantial acceleration rate of 10. Our code is available at https://github.com/zhyjSIAT/SPIRiT-Diffusion.
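Numerically, sampling from an SDE of the kind the abstract describes comes down to time discretization. A generic Euler-Maruyama sketch of dx = f(x,t)dt + g(t)dW, where the drift and diffusion functions below are toy stand-ins rather than the SPIRiT-derived terms from the paper:

```python
import random

def euler_maruyama(x0, drift, diffusion, t0, t1, n_steps, seed=0):
    """Euler-Maruyama discretization of dx = f(x,t) dt + g(t) dW.
    Each step adds the deterministic drift plus a Gaussian increment
    scaled by sqrt(dt)."""
    rng = random.Random(seed)
    dt = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, abs(dt) ** 0.5)
        x = x + drift(x, t) * dt + diffusion(t) * dw
        t += dt
    return x

# Sanity check: with zero diffusion the scheme reduces to explicit Euler
# for dx/dt = -x, so x(1) should approach exp(-1) ~ 0.3679.
x_end = euler_maruyama(1.0, lambda x, t: -x, lambda t: 0.0, 0.0, 1.0, 1000)
```

In SPIRiT-Diffusion the drift is built from the self-consistency operator, so the trajectory stays tied to the k-space physics instead of wandering from pure noise.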

Three-Dimensional Variable Slab-Selective Projection Acquisition Imaging.
Pub Date : 2024-09-30 DOI: 10.1109/TMI.2024.3460974
Jinil Park, Taehoon Shin, Jang-Yeon Park

Three-dimensional (3D) projection acquisition (PA) imaging has recently gained attention because of its advantages, such as the achievability of very short echo times, lower sensitivity to motion, and undersampled acquisition of projections without sacrificing spatial resolution. However, larger subjects require a stronger Nyquist criterion and are more likely to be affected by outer-volume signals outside the field of view (FOV), which significantly degrade the image quality. Here, we propose a variable slab-selective projection acquisition (VSS-PA) method to relax the Nyquist criterion and effectively suppress aliasing streak artifacts in 3D PA imaging. The proposed method involves maintaining the vertical orientation of the slab-selective gradient for frequency-selective spin excitation and the readout gradient for data acquisition. As VSS-PA can selectively excite spins only within the width of the desired FOV in the projection direction during data acquisition, the effective size of the scanned object that determines the Nyquist criterion can be reduced. Additionally, unwanted signals originating from outside the FOV (e.g., aliasing streak artifacts) can be effectively avoided. The relaxation of the Nyquist criterion owing to VSS-PA was theoretically described and confirmed through numerical simulations and phantom and human lung experiments. These experiments further showed that the aliasing streak artifacts were almost completely suppressed.
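The dependence of the Nyquist criterion on the effective object size can be illustrated with the classic 2D radial rule of thumb (minimum number of spokes ~ pi/2 times the matrix size). The paper's 3D criterion differs in detail, but it scales the same way with the excited width, which is what VSS-PA exploits:

```python
import math

def radial_nyquist_spokes(fov_mm, resolution_mm):
    """Minimum spoke count for fully sampled 2D radial imaging:
    ceil(pi/2 * matrix size). A rule-of-thumb stand-in for the
    3D PA criterion discussed in the abstract."""
    matrix = fov_mm / resolution_mm
    return math.ceil(math.pi / 2 * matrix)

# Halving the effective excited width in the projection direction
# (as VSS-PA does) halves the number of projections the criterion demands.
full = radial_nyquist_spokes(256, 1.0)  # 403 spokes
half = radial_nyquist_spokes(128, 1.0)  # 202 spokes
```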

Physics-Informed DeepMRI: k-Space Interpolation Meets Heat Diffusion
Pub Date : 2024-09-18 DOI: 10.1109/TMI.2024.3462988
Zhuo-Xu Cui;Congcong Liu;Xiaohong Fan;Chentao Cao;Jing Cheng;Qingyong Zhu;Yuanyuan Liu;Sen Jia;Haifeng Wang;Yanjie Zhu;Yihang Zhou;Jianping Zhang;Qiegen Liu;Dong Liang
Recently, diffusion models have shown considerable promise for MRI reconstruction. However, extensive experimentation has revealed that these models are prone to generating artifacts due to the inherent randomness involved in generating images from pure noise. To achieve more controlled image reconstruction, we reexamine the concept of interpolatable physical priors in k-space data, focusing specifically on the interpolation of high-frequency (HF) k-space data from low-frequency (LF) k-space data. Broadly, this insight drives a shift in the generation paradigm from random noise to a more deterministic approach grounded in the existing LF k-space data. Building on this, we first establish a relationship between the interpolation of HF k-space data from LF k-space data and the reverse heat diffusion process, providing a fundamental framework for designing diffusion models that generate missing HF data. To further improve reconstruction accuracy, we integrate a traditional physics-informed k-space interpolation model into our diffusion framework as a data fidelity term. Experimental validation using publicly available datasets demonstrates that our approach significantly surpasses traditional k-space interpolation methods, deep learning-based k-space interpolation techniques, and conventional diffusion models, particularly in HF regions. Finally, we assess the generalization performance of our model across various out-of-distribution datasets. Our code is available at https://github.com/ZhuoxuCui/Heat-Diffusion.
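The forward process here is the heat equation rather than noising: repeated diffusion steps attenuate high frequencies first while preserving low-frequency content, which is exactly why reversing it recovers HF data from LF data. A toy 1D finite-difference illustration (not the authors' implementation):

```python
def heat_step(u, alpha=0.25):
    """One explicit finite-difference step of the 1D heat equation,
    u_i += alpha * (u_{i-1} - 2 u_i + u_{i+1}), with zero-flux boundaries.
    Repeated steps damp high frequencies first and conserve the total
    (DC) signal."""
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        out.append(u[i] + alpha * (left - 2 * u[i] + right))
    return out

# A sharp spike (pure high-frequency content plus DC) flattens out,
# while the total signal is conserved.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(100):
    u = heat_step(u)
```

Reversing this process is well-posed only for the frequencies that survive, which is why the paper anchors the reverse diffusion to the measured LF k-space data.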
IEEE Transactions on Medical Imaging, vol. 43, no. 10, pp. 3503-3520.
Facing Differences of Similarity: Intra- and Inter-Correlation Unsupervised Learning for Chest X-Ray Anomaly Detection.
Pub Date : 2024-09-16 DOI: 10.1109/TMI.2024.3461231
Shicheng Xu, Wei Li, Zuoyong Li, Tiesong Zhao, Bob Zhang

Anomaly detection can significantly aid doctors in interpreting chest X-rays. The commonly used strategy involves utilizing a pre-trained network to extract features from normal data to establish feature representations. However, when a pre-trained network is applied to more detailed X-rays, differences of similarity can limit the robustness of these feature representations. Therefore, we propose an intra- and inter-correlation learning framework for chest X-ray anomaly detection. Firstly, to better leverage the similar anatomical structure information in chest X-rays, we introduce the Anatomical-Feature Pyramid Fusion Module for feature fusion. This module aims to obtain fusion features with both local details and global contextual information. These fusion features are initialized by a trainable feature mapper and stored in a feature bank to serve as centers for learning. Furthermore, to face the differences of similarity (FDS) introduced by the pre-trained network, we propose an intra- and inter-correlation learning strategy: (1) we use intra-correlation learning to establish intra-correlation between mapped features of individual images and semantic centers, thereby initially discovering lesions; (2) we employ inter-correlation learning to establish inter-correlation between mapped features of different images, further mitigating the differences of similarity introduced by the pre-trained network and achieving effective detection results even in diverse chest disease environments. Finally, a comparison with 18 state-of-the-art methods on three datasets demonstrates the superiority and effectiveness of the proposed method across various scenarios.
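The intra-correlation idea, scoring a mapped feature by its agreement with the semantic centers stored in the feature bank, can be sketched with plain cosine similarity. This is a simplified stand-in: the paper's trainable mapper and learning objectives are more involved, and the centers below are hypothetical.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors given as lists."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def anomaly_score(feature, centers):
    """Dissimilarity to the closest semantic center in the feature bank:
    low for features resembling normal anatomy, high otherwise."""
    return 1.0 - max(cosine_similarity(feature, c) for c in centers)

centers = [[1.0, 0.0], [0.0, 1.0]]               # hypothetical semantic centers
normal = anomaly_score([0.9, 0.1], centers)      # near a center -> low score
abnormal = anomaly_score([-1.0, -1.0], centers)  # far from all centers -> high score
```

The inter-correlation term then compares mapped features across different images, which this sketch omits.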

ConvexAdam: Self-Configuring Dual-Optimisation-Based 3D Multitask Medical Image Registration.
Pub Date : 2024-09-16 DOI: 10.1109/TMI.2024.3462248
Hanna Siebert, Christoph Grossbrohmer, Lasse Hansen, Mattias P Heinrich

Registration of medical image data requires methods that can align anatomical structures precisely while applying smooth and plausible transformations. Ideally, these methods should furthermore operate quickly and apply to a wide variety of tasks. Deep learning-based image registration methods usually entail an elaborate learning procedure with the need for extensive training data. However, they often struggle with versatility when aiming to apply the same approach across various anatomical regions and different imaging modalities. In this work, we present a method that extracts semantic or hand-crafted image features and uses a coupled convex optimisation followed by Adam-based instance optimisation for multitask medical image registration. We make use of pre-trained semantic feature extraction models for the individual datasets and combine them with our fast dual optimisation procedure for deformation field computation. Furthermore, we propose a very fast automatic hyperparameter selection procedure that explores many settings and ranks them on validation data to provide a self-configuring image registration framework. With our approach, we can align image data for various tasks with little learning. We conduct experiments on all available Learn2Reg challenge datasets and obtain results that place in the upper ranks of the challenge leaderboards. Our code is available at github.com/multimodallearning/convexAdam.
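The two-stage recipe, a discrete search over candidate displacements followed by continuous instance optimisation, can be sketched in 1D. Plain finite-difference gradient descent stands in for Adam, and a sum-of-squared-differences cost stands in for the paper's feature-based cost; none of this is the authors' code.

```python
def ssd(shift, moving, fixed):
    """Sum of squared differences between fixed and the moving signal
    sampled at x + shift (linear interpolation, edges clamped)."""
    total = 0.0
    n = len(moving)
    for i in range(n):
        x = i + shift
        j = max(0, min(n - 2, int(x)))
        w = x - j
        val = (1 - w) * moving[j] + w * moving[j + 1]
        total += (val - fixed[i]) ** 2
    return total

def register(moving, fixed, candidates, steps=100, lr=0.01):
    """Stage 1: exhaustive (discrete) search over candidate shifts.
    Stage 2: continuous refinement by gradient descent from the
    discrete optimum, mirroring the convex-then-Adam structure."""
    shift = min(candidates, key=lambda s: ssd(s, moving, fixed))
    eps = 1e-4
    for _ in range(steps):
        grad = (ssd(shift + eps, moving, fixed)
                - ssd(shift - eps, moving, fixed)) / (2 * eps)
        shift -= lr * grad
    return shift

fixed = [0, 0, 1, 2, 1, 0, 0, 0]
moving = [0, 0, 0, 1, 2, 1, 0, 0]  # fixed shifted right by one sample
best = register(moving, fixed, candidates=range(-3, 4))  # converges near 1.0
```

The discrete stage makes the continuous stage robust to large displacements, which is the point of coupling the two.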

Self-navigated 3D diffusion MRI using an optimized CAIPI sampling and structured low-rank reconstruction estimated navigator.
Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3454994
Ziyu Li, Karla L Miller, Xi Chen, Mark Chiew, Wenchuan Wu

3D multi-slab acquisitions are an appealing approach for diffusion MRI because they are compatible with the imaging regime delivering optimal SNR efficiency. In conventional 3D multi-slab imaging, shot-to-shot phase variations caused by motion pose challenges due to the use of multi-shot k-space acquisition. Navigator acquisition after each imaging echo is typically employed to correct phase variations, which prolongs scan time and increases the specific absorption rate (SAR). The aim of this study is to develop a highly efficient, self-navigated method to correct for phase variations in 3D multi-slab diffusion MRI without explicitly acquiring navigators. The sampling of each shot is carefully designed to intersect with the central kz=0 plane of each slab, and the multi-shot sampling is optimized for self-navigation performance while retaining decent reconstruction quality. The kz=0 intersections from all shots are jointly used to reconstruct a 2D phase map for each shot using a structured low-rank constrained reconstruction that leverages the redundancy in shot and coil dimensions. The phase maps are used to eliminate the shot-to-shot phase inconsistency in the final 3D multi-shot reconstruction. We demonstrate the method's efficacy using retrospective simulations and prospectively acquired in-vivo experiments at 1.22 mm and 1.09 mm isotropic resolutions. Compared to conventional navigated 3D multi-slab imaging, the proposed self-navigated method achieves comparable image quality while shortening the scan time by 31.7% and improving the SNR efficiency by 15.5%. The proposed method produces comparable quality of DTI and white matter tractography to conventional navigated 3D multi-slab acquisition with a much shorter scan time.
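The core self-navigation idea, estimating each shot's motion-induced phase from data the shot already acquires and removing it before combining shots coherently, can be sketched in a toy 1D setting. Here a single central sample plays the navigator's role; the paper instead estimates full 2D phase maps from the kz=0 intersections via a structured low-rank reconstruction.

```python
import cmath

def phase_correct(shots, nav_index=0):
    """Remove each shot's global phase, estimated from a self-navigator
    sample, then average the shots coherently."""
    corrected = []
    for shot in shots:
        phi = cmath.phase(shot[nav_index])  # shot-to-shot phase estimate
        corrected.append([s * cmath.exp(-1j * phi) for s in shot])
    n = len(shots)
    return [sum(col) / n for col in zip(*corrected)]

truth = [1 + 0j, 0.5 + 0.5j, 0.2 - 0.1j]
# Motion imparts a different global phase to each shot.
shots = [[s * cmath.exp(1j * phi) for s in truth] for phi in (0.0, 1.2, -0.7)]
combined = phase_correct(shots)  # recovers truth; uncorrected averaging would not
```

Without the correction, summing the three shots directly would let the random phases partially cancel the signal, which is the artifact the navigator-free method avoids.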

Self-navigated 3D diffusion MRI using an optimized CAIPI sampling and structured low-rank reconstruction estimated navigator.
Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3454994
Ziyu Li, Karla L Miller, Xi Chen, Mark Chiew, Wenchuan Wu

3D multi-slab acquisitions are an appealing approach for diffusion MRI because they are compatible with the imaging regime delivering optimal SNR efficiency. In conventional 3D multi-slab imaging, shot-to-shot phase variations caused by motion pose challenges due to the use of multi-shot k-space acquisition. Navigator acquisition after each imaging echo is typically employed to correct phase variations, which prolongs scan time and increases the specific absorption rate (SAR). The aim of this study is to develop a highly efficient, self-navigated method to correct for phase variations in 3D multi-slab diffusion MRI without explicitly acquiring navigators. The sampling of each shot is carefully designed to intersect with the central kz=0 plane of each slab, and the multi-shot sampling is optimized for self-navigation performance while retaining decent reconstruction quality. The kz=0 intersections from all shots are jointly used to reconstruct a 2D phase map for each shot using a structured low-rank constrained reconstruction that leverages the redundancy in shot and coil dimensions. The phase maps are used to eliminate the shot-to-shot phase inconsistency in the final 3D multi-shot reconstruction. We demonstrate the method's efficacy using retrospective simulations and prospectively acquired in-vivo experiments at 1.22 mm and 1.09 mm isotropic resolutions. Compared to conventional navigated 3D multi-slab imaging, the proposed self-navigated method achieves comparable image quality while shortening the scan time by 31.7% and improving the SNR efficiency by 15.5%. The proposed method produces comparable quality of DTI and white matter tractography to conventional navigated 3D multi-slab acquisition with a much shorter scan time.
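The final correction step described above — removing each shot's estimated 2D phase before combining shots — can be sketched in a few lines of numpy. This is a minimal illustration under simplifying assumptions (single coil, image-space combination by averaging), not the authors' implementation; `combine_shots` and its arguments are hypothetical names.

```python
import numpy as np

def combine_shots(shot_images, phase_maps):
    """Combine multi-shot image data after removing shot-to-shot phase.

    shot_images: complex array (n_shots, nx, ny), per-shot image-space data
    phase_maps:  real array  (n_shots, nx, ny), estimated phase per shot
    """
    corrected = shot_images * np.exp(-1j * phase_maps)  # undo each shot's phase
    return corrected.mean(axis=0)                       # phase-consistent average
```

With the phase maps estimated (here, from the kz=0 intersections), the shots add coherently; without correction, the random motion-induced phases interfere destructively and cause signal dropout.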
Citations: 0
Cohort-Individual Cooperative Learning for Multimodal Cancer Survival Analysis.
Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3455931
Huajun Zhou, Fengtao Zhou, Hao Chen

Recently, we have witnessed impressive achievements in cancer survival analysis by integrating multimodal data, e.g., pathology images and genomic profiles. However, the heterogeneity and high dimensionality of these modalities pose significant challenges for extracting discriminative representations while maintaining good generalization. In this paper, we propose a Cohort-Individual Cooperative Learning (CCL) framework to advance cancer survival analysis by combining knowledge decomposition with cohort guidance. Specifically, first, we propose a Multimodal Knowledge Decomposition (MKD) module to explicitly decompose multimodal knowledge into four distinct components: redundancy, synergy, and the uniqueness of each of the two modalities. Such a comprehensive decomposition can help the model perceive easily overlooked yet important information, facilitating effective multimodal fusion. Second, we propose Cohort Guidance Modeling (CGM) to mitigate the risk of overfitting task-irrelevant information. It promotes a more comprehensive and robust understanding of the underlying multimodal data, while avoiding the pitfalls of overfitting and enhancing the generalization ability of the model. By combining the knowledge decomposition and cohort guidance methods, we develop a robust multimodal survival analysis model with enhanced discrimination and generalization abilities. Extensive experimental results on five cancer datasets demonstrate the effectiveness of our model in integrating multimodal data for survival analysis. The code will be publicly available soon.
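Survival models like this one are typically trained with the negative Cox partial log-likelihood over the predicted risk scores. Below is a minimal numpy version of that standard objective — a generic formulation for intuition, not code from the paper; ties in event times are ignored for simplicity.

```python
import numpy as np

def cox_neg_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood.

    risk:  (n,) predicted risk scores
    time:  (n,) follow-up times
    event: (n,) 1 if the event was observed, 0 if censored
    """
    order = np.argsort(-time)            # sort by descending follow-up time
    risk, event = risk[order], event[order]
    # running log-sum-exp over sorted risks = log of the risk set {j : t_j >= t_i}
    log_risk_set = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_risk_set)[event == 1])
```

Sorting by descending time makes each prefix of the sorted array exactly the at-risk set for that subject, so a single cumulative log-sum-exp replaces an O(n²) double loop.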

Citations: 0
Low-dose CT image super-resolution with noise suppression based on prior degradation estimator and self-guidance mechanism.
Pub Date : 2024-09-04 DOI: 10.1109/TMI.2024.3454268
Jianning Chi, Zhiyi Sun, Liuyi Meng, Siqi Wang, Xiaosheng Yu, Xiaolin Wei, Bin Yang

The anatomies in low-dose computed tomography (LDCT) are usually distorted during zoomed-in observation because of the limited number of photon quanta. Super-resolution (SR) methods have been proposed as post-processing approaches to enhance the quality of LDCT images without increasing radiation damage to patients, but they suffer from incorrect prediction of degradation information and incomplete leverage of the internal connections within the 3D CT volume, resulting in an imbalance between noise removal and detail sharpening in the super-resolution results. In this paper, we propose a novel LDCT SR network in which the degradation information self-parsed from the LDCT slice and the 3D anatomical information captured from the LDCT volume are integrated to guide the backbone network. The prior degradation estimator (PDE) follows a contrastive learning strategy to estimate the degradation features in LDCT images without paired low-/normal-dose CT images. The self-guidance fusion module (SGFM) is designed to capture anatomical features with internal 3D consistency between the squashed images along the coronal, sagittal, and axial views of the CT volume. Finally, the features representing degradation and anatomical structures are integrated to recover CT images at higher resolutions. We apply the proposed method to the 2016 NIH-AAPM Mayo Clinic LDCT Grand Challenge dataset and our collected LDCT dataset to evaluate its ability to recover LDCT images. Experimental results illustrate the superiority of our network in terms of quantitative metrics and qualitative observations, demonstrating its potential for recovering detail-sharp and noise-free CT images at higher resolutions from practical LDCT images.
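For intuition about the SR setting, training pairs are often synthesized by degrading high-resolution slices. The sketch below is a toy fixed degradation model (box downsampling plus Gaussian noise) — the paper's point is precisely that a learned degradation estimator replaces such fixed assumptions; the function name and parameters are illustrative only.

```python
import numpy as np

def degrade(hr, scale=2, noise_sigma=0.02, rng=None):
    """Produce a synthetic low-resolution, noisy slice from a high-res one."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = hr.shape
    hr = hr[:h - h % scale, :w - w % scale]          # crop to a multiple of scale
    # block-average over non-overlapping scale x scale patches
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```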

Citations: 0
LOQUAT: Low-Rank Quaternion Reconstruction for Photon-Counting CT.
Pub Date : 2024-09-03 DOI: 10.1109/TMI.2024.3454174
Zefan Lin, Guotao Quan, Haixian Qu, Yanfeng Du, Jun Zhao

Photon-counting computed tomography (PCCT) may dramatically benefit clinical practice due to its versatility, such as dose reduction and material characterization. However, the limited number of photons detected in each individual energy bin can induce severe noise contamination in the reconstructed image. Fortunately, the notable low-rank prior inherent in the PCCT image can guide the reconstruction to a denoised outcome. To fully exploit this intrinsic low-rankness, we propose a novel reconstruction algorithm based on quaternion representation (QR), called low-rank quaternion reconstruction (LOQUAT). First, we organize a group of nonlocal similar patches into a quaternion matrix. Then, an adjusted weighted Schatten-p norm (AWSN) is introduced and imposed on the matrix to enforce its low-rank nature. Subsequently, we formulate an AWSN-regularized model and devise an alternating direction method of multipliers (ADMM) framework to solve it. Experiments on simulated and real-world data substantiate the superiority of the LOQUAT technique over several state-of-the-art competitors in terms of both visual inspection and quantitative metrics. Moreover, our QR-based method exhibits lower computational complexity than some popular tensor representation (TR) based counterparts. Furthermore, the global convergence of LOQUAT is theoretically established under a mild condition. These properties bolster the robustness and practicality of LOQUAT, facilitating its application in PCCT clinical scenarios. The source code will be available at https://github.com/linzf23/LOQUAT.
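The computational core of such low-rank ADMM solvers is a singular-value shrinkage step. For the p = 1 special case, a weighted Schatten-p norm reduces to a weighted nuclear norm, whose proximal operator shrinks each singular value by its weight. The sketch below is that simplified stand-in — on a real matrix rather than a quaternion one, and not the paper's AWSN prox.

```python
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular value thresholding: proximal operator of a
    weighted nuclear norm (closed form assumes weights non-decreasing
    across the descending singular values)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # shrink, clipping at zero
    return (U * s_shrunk) @ Vt                # scale columns of U, recompose
```

Inside an ADMM loop this step is applied to each group-of-patches matrix per iteration, driving small singular values — which mostly carry noise — to zero while retaining the dominant low-rank structure.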

Citations: 0