
Latest articles from IEEE Transactions on Medical Imaging

M₂DC: A Meta-Learning Framework for Generalizable Diagnostic Classification of Major Depressive Disorder
Pub Date : 2024-09-16 DOI: 10.1109/TMI.2024.3461312
Jianpo Su;Bo Wang;Zhipeng Fan;Yifan Zhang;Ling-Li Zeng;Hui Shen;Dewen Hu
Psychiatric diseases place heavy burdens on both individual health and social stability. Accurate and timely diagnosis is essential for effective treatment and intervention. Thanks to the rapid development of brain imaging technology and machine learning algorithms, diagnostic classification of psychiatric diseases can be achieved from brain images. However, due to differences in scanners and scanning parameters, the generalization capability of diagnostic classification models has long been an issue. We propose Meta-learning with Meta batch normalization and Distance Constraint (M2DC) for training diagnostic classification models. The framework simulates the train-test domain shift during training and promotes intra-class cohesion as well as inter-class separation, leading to clearer classification margins and more generalizable models. To better encode dynamic brain graphs, we propose a concatenated spatiotemporal attention graph isomorphism network (CSTAGIN) as the backbone. The network is trained for the diagnostic classification of major depressive disorder (MDD) on multi-site brain graphs. Extensive experiments on brain images from over 3261 subjects show that models trained with M2DC achieve the best performance on cross-site diagnostic classification tasks compared with various contemporary domain generalization methods and SOTA studies. To date, M2DC is the first framework for multi-source closed-set domain-generalizable training of diagnostic classification models for MDD, and the trained models can be applied to reliable auxiliary diagnosis of novel data.
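
To make the distance-constraint idea concrete, below is a minimal PyTorch sketch of a loss that pulls features toward their class centroid and pushes centroids apart; the feature dimension, margin, and exact loss form are illustrative assumptions, not the paper's formulation.

import torch

def distance_constraint_loss(features, labels, margin=1.0):
    """Toy intra-class cohesion / inter-class separation loss.

    features: (N, D) embeddings from a backbone such as CSTAGIN
    labels:   (N,) integer class labels; the batch is assumed to
              contain at least two classes (e.g., MDD vs. control).
    """
    classes = labels.unique()
    # Intra-class term: mean squared distance of samples to their class centroid.
    centroids = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    intra = torch.stack([
        ((features[labels == c] - centroids[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)
    ]).mean()
    # Inter-class term: hinge that keeps centroids at least `margin` apart.
    dists = torch.cdist(centroids, centroids)
    off_diag = dists[~torch.eye(len(classes), dtype=torch.bool)]
    inter = torch.clamp(margin - off_diag, min=0).mean()
    return intra + inter
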
{"title":"M₂DC: A Meta-Learning Framework for Generalizable Diagnostic Classification of Major Depressive Disorder","authors":"Jianpo Su;Bo Wang;Zhipeng Fan;Yifan Zhang;Ling-Li Zeng;Hui Shen;Dewen Hu","doi":"10.1109/TMI.2024.3461312","DOIUrl":"10.1109/TMI.2024.3461312","url":null,"abstract":"Psychiatric diseases are bringing heavy burdens for both individual health and social stability. The accurate and timely diagnosis of the diseases is essential for effective treatment and intervention. Thanks to the rapid development of brain imaging technology and machine learning algorithms, diagnostic classification of psychiatric diseases can be achieved based on brain images. However, due to divergences in scanning machines or parameters, the generalization capability of diagnostic classification models has always been an issue. We propose Meta-learning with Meta batch normalization and Distance Constraint (M2DC) for training diagnostic classification models. The framework can simulate the train-test domain shift situation and promote intra-class cohesion, as well as inter-class separation, which can lead to clearer classification margins and more generalizable models. To better encode dynamic brain graphs, we propose a concatenated spatiotemporal attention graph isomorphism network (CSTAGIN) as the backbone. The network is trained for the diagnostic classification of major depressive disorder (MDD) based on multi-site brain graphs. Extensive experiments on brain images from over 3261 subjects show that models trained by M2DC achieve the best performance on cross-site diagnostic classification tasks compared to various contemporary domain generalization methods and SOTA studies. The proposed M2DC is by far the first framework for multi-source closed-set domain generalizable training of diagnostic classification models for MDD and the trained models can be applied to reliable auxiliary diagnosis on novel data.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"855-867"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142236361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ConvexAdam: Self-Configuring Dual-Optimization-Based 3D Multitask Medical Image Registration
Pub Date : 2024-09-16 DOI: 10.1109/TMI.2024.3462248
Hanna Siebert;Christoph Großbröhmer;Lasse Hansen;Mattias P. Heinrich
Registration of medical image data requires methods that can align anatomical structures precisely while applying smooth and plausible transformations. Ideally, these methods should also operate quickly and apply to a wide variety of tasks. Deep learning-based image registration methods usually entail an elaborate learning procedure and require extensive training data. However, they often struggle with versatility when the same approach is applied across various anatomical regions and different imaging modalities. In this work, we present a method that extracts semantic or hand-crafted image features and uses a coupled convex optimisation followed by Adam-based instance optimisation for multitask medical image registration. We make use of pre-trained semantic feature extraction models for the individual datasets and combine them with our fast dual optimisation procedure for deformation field computation. Furthermore, we propose a very fast automatic hyperparameter selection procedure that explores many settings and ranks them on validation data, providing a self-configuring image registration framework. With our approach, we can align image data for various tasks with little learning. We conduct experiments on all available Learn2Reg challenge datasets and obtain results that place in the upper ranks of the challenge leaderboards. http://github.com/multimodallearning/convexAdam
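
As a rough illustration of the second stage, here is a hedged PyTorch sketch of Adam-based instance optimisation of a dense 3D displacement field over fixed and moving feature volumes; the learning rate, smoothness weight, and grid conventions are assumptions, and the coupled convex stage and feature extraction are omitted.

import torch
import torch.nn.functional as F

def adam_instance_optimisation(fix_feat, mov_feat, disp_init, iters=100, lam=0.5):
    """Refine a displacement field by minimising feature MSE plus smoothness.

    fix_feat, mov_feat: (1, C, D, H, W) feature volumes
    disp_init:          (1, 3, D, H, W) initial displacement in normalised [-1, 1] units
    """
    disp = disp_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([disp], lr=0.1)
    # Identity sampling grid of shape (1, D, H, W, 3).
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0), fix_feat.shape,
                         align_corners=False)
    for _ in range(iters):
        opt.zero_grad()
        warped = F.grid_sample(mov_feat, grid + disp.permute(0, 2, 3, 4, 1),
                               align_corners=False)
        data = F.mse_loss(warped, fix_feat)
        # First-order smoothness of the displacement field along each axis.
        smooth = (disp.diff(dim=2).abs().mean() +
                  disp.diff(dim=3).abs().mean() +
                  disp.diff(dim=4).abs().mean())
        (data + lam * smooth).backward()
        opt.step()
    return disp.detach()
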
{"title":"ConvexAdam: Self-Configuring Dual-Optimization-Based 3D Multitask Medical Image Registration","authors":"Hanna Siebert;Christoph Großbröhmer;Lasse Hansen;Mattias P. Heinrich","doi":"10.1109/TMI.2024.3462248","DOIUrl":"10.1109/TMI.2024.3462248","url":null,"abstract":"Registration of medical image data requires methods that can align anatomical structures precisely while applying smooth and plausible transformations. Ideally, these methods should furthermore operate quickly and apply to a wide variety of tasks. Deep learning-based image registration methods usually entail an elaborate learning procedure with the need for extensive training data. However, they often struggle with versatility when aiming to apply the same approach across various anatomical regions and different imaging modalities. In this work, we present a method that extracts semantic or hand-crafted image features and uses a coupled convex optimisation followed by Adam-based instance optimisation for multitask medical image registration. We make use of pre-trained semantic feature extraction models for the individual datasets and combine them with our fast dual optimisation procedure for deformation field computation. Furthermore, we propose a very fast automatic hyperparameter selection procedure that explores many settings and ranks them on validation data to provide a self-configuring image registration framework. With our approach, we can align image data for various tasks with little learning. We conduct experiments on all available Learn2Reg challenge datasets and obtain results that are to be positioned in the upper ranks of the challenge leaderboards. <uri>http://github.com/multimodallearning/convexAdam</uri>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"738-748"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10681158","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Multi-Perspective Self-Supervised Generative Adversarial Network for FS to FFPE Stain Transfer
Pub Date : 2024-09-16 DOI: 10.1109/TMI.2024.3460795
Yiyang Lin;Yifeng Wang;Zijie Fang;Zexin Li;Xianchao Guan;Danling Jiang;Yongbing Zhang
In clinical practice, frozen section (FS) images can be used to obtain immediate intraoperative pathological results because of their fast production speed. However, compared with formalin-fixed and paraffin-embedded (FFPE) images, FS images suffer from much poorer quality. It is therefore of great significance to transfer FS images to FFPE-style ones, enabling pathologists to examine high-quality images during the operation. However, obtaining paired FS and FFPE images is quite hard, so it is difficult to obtain accurate results using supervised methods. Beyond this, FS-to-FFPE stain transfer faces several challenges. First, the number and positions of the nuclei scattered throughout the image are hard to preserve during the transfer process. Second, transferring blurry FS images to clear FFPE ones is quite challenging. Third, compared with the center regions of each patch, the edge regions are harder to transfer. To overcome these problems, a multi-perspective self-supervised GAN incorporating three auxiliary tasks is proposed to improve the performance of FS-to-FFPE stain transfer. Concretely, a nucleus consistency constraint is designed to keep nuclei high-fidelity, an FFPE-guided image deblurring is proposed to improve clarity, and a multi-field-of-view consistency constraint is designed to better generate the edge regions. Objective indicators and pathologists' evaluations in experiments on five datasets across different countries demonstrate the effectiveness of our method. In addition, validation on the downstream task of microsatellite instability prediction also confirms the performance improvement gained by transferring FS images to FFPE ones. Our code link is https://github.com/linyiyang98/Self-Supervised-FS2FFPE.git.
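
For illustration, a hedged sketch of what a nucleus consistency constraint can look like: a frozen nucleus segmenter is applied to both the FS input and the generated FFPE-style output, and layout changes are penalised. The segmenter interface and the L1 form are assumptions, not the paper's exact design.

import torch
import torch.nn.functional as F

def nucleus_consistency_loss(fs_img, fake_ffpe, nucleus_segmenter):
    """Penalise changes in nucleus number and position during stain transfer.

    nucleus_segmenter: a frozen network returning per-pixel nucleus
    probabilities of shape (N, 1, H, W); hypothetical here.
    """
    with torch.no_grad():
        target_mask = nucleus_segmenter(fs_img)   # nuclei in the FS input
    pred_mask = nucleus_segmenter(fake_ffpe)      # nuclei in the generated image
    return F.l1_loss(pred_mask, target_mask)
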
{"title":"A Multi-Perspective Self-Supervised Generative Adversarial Network for FS to FFPE Stain Transfer","authors":"Yiyang Lin;Yifeng Wang;Zijie Fang;Zexin Li;Xianchao Guan;Danling Jiang;Yongbing Zhang","doi":"10.1109/TMI.2024.3460795","DOIUrl":"10.1109/TMI.2024.3460795","url":null,"abstract":"In clinical practice, frozen section (FS) images can be utilized to obtain the immediate pathological results of the patients in operation due to their fast production speed. However, compared with the formalin-fixed and paraffin-embedded (FFPE) images, the FS images greatly suffer from poor quality. Thus, it is of great significance to transfer the FS image to the FFPE one, which enables pathologists to observe high-quality images in operation. However, obtaining the paired FS and FFPE images is quite hard, so it is difficult to obtain accurate results using supervised methods. Apart from this, the FS to FFPE stain transfer faces many challenges. Firstly, the number and position of nuclei scattered throughout the image are hard to maintain during the transfer process. Secondly, transferring the blurry FS images to the clear FFPE ones is quite challenging. Thirdly, compared with the center regions of each patch, the edge regions are harder to transfer. To overcome these problems, a multi-perspective self-supervised GAN, incorporating three auxiliary tasks, is proposed to improve the performance of FS to FFPE stain transfer. Concretely, a nucleus consistency constraint is designed to enable the high-fidelity of nuclei, an FFPE guided image deblurring is proposed for improving the clarity, and a multi-field-of-view consistency constraint is designed to better generate the edge regions. Objective indicators and pathologists’ evaluation for experiments on the five datasets across different countries have demonstrated the effectiveness of our method. In addition, the validation in the downstream task of microsatellite instability prediction has also proved the performance improvement by transferring the FS images to FFPE ones. Our code link is <uri>https://github.com/linyiyang98/Self-Supervised-FS2FFPE.git</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"774-788"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142236364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prototype-Guided Graph Reasoning Network for Few-Shot Medical Image Segmentation
Pub Date : 2024-09-13 DOI: 10.1109/TMI.2024.3459943
Wendong Huang;Jinwu Hu;Junhao Xiao;Yang Wei;Xiuli Bi;Bin Xiao
Few-shot semantic segmentation (FSS) holds tremendous potential for data-scarce scenarios, particularly medical segmentation tasks with only a few labeled samples. Most existing FSS methods distinguish query objects under the guidance of support prototypes. However, the variances in appearance and scale between support and query objects of the same anatomical class are often considerable in practical clinical scenarios, resulting in undesirable query segmentation masks. To tackle this challenge, we propose a novel prototype-guided graph reasoning network (PGRNet) to explicitly explore potential contextual relationships in structured query images. Specifically, a prototype-guided graph reasoning module performs information interaction on the query graph under the guidance of support prototypes, fully exploiting the structural properties of query images to overcome intra-class variances. Moreover, instead of fixed support prototypes, a dynamic prototype generation mechanism is devised to yield a collection of dynamic support prototypes by mining rich contextual information from support images, further boosting the efficiency of information interaction between the support and query branches. Equipped with these two components, PGRNet learns abundant contextual representations for query images and is therefore more resilient to object variations. We validate our method on three publicly available medical segmentation datasets: CHAOS-T2, MS-CMRSeg, and Synapse. Experiments indicate that PGRNet outperforms previous FSS methods by a considerable margin and establishes new state-of-the-art performance.
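
For background, a minimal sketch of the standard prototype machinery such methods build on: masked average pooling to extract a support prototype and cosine similarity to score query pixels. PGRNet's graph reasoning and dynamic prototype modules are not reproduced; shapes and the temperature are assumptions.

import torch
import torch.nn.functional as F

def masked_average_prototype(support_feat, support_mask):
    """support_feat: (1, C, H, W); support_mask: (1, 1, H, W) with values in {0, 1}."""
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    proto = (support_feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return proto  # (1, C)

def cosine_score_map(query_feat, proto, tau=20.0):
    """query_feat: (1, C, H, W) -> foreground score map of shape (1, 1, H, W)."""
    q = F.normalize(query_feat, dim=1)
    p = F.normalize(proto, dim=1)[..., None, None]
    return tau * (q * p).sum(dim=1, keepdim=True)
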
{"title":"Prototype-Guided Graph Reasoning Network for Few-Shot Medical Image Segmentation","authors":"Wendong Huang;Jinwu Hu;Junhao Xiao;Yang Wei;Xiuli Bi;Bin Xiao","doi":"10.1109/TMI.2024.3459943","DOIUrl":"10.1109/TMI.2024.3459943","url":null,"abstract":"Few-shot semantic segmentation (FSS) is of tremendous potential for data-scarce scenarios, particularly in medical segmentation tasks with merely a few labeled data. Most of the existing FSS methods typically distinguish query objects with the guidance of support prototypes. However, the variances in appearance and scale between support and query objects from the same anatomical class are often exceedingly considerable in practical clinical scenarios, thus resulting in undesirable query segmentation masks. To tackle the aforementioned challenge, we propose a novel prototype-guided graph reasoning network (PGRNet) to explicitly explore potential contextual relationships in structured query images. Specifically, a prototype-guided graph reasoning module is proposed to perform information interaction on the query graph under the guidance of support prototypes to fully exploit the structural properties of query images to overcome intra-class variances. Moreover, instead of fixed support prototypes, a dynamic prototype generation mechanism is devised to yield a collection of dynamic support prototypes by mining rich contextual information from support images to further boost the efficiency of information interaction between support and query branches. Equipped with the proposed two components, PGRNet can learn abundant contextual representations for query images and is therefore more resilient to object variations. We validate our method on three publicly available medical segmentation datasets, namely CHAOS-T2, MS-CMRSeg, and Synapse. Experiments indicate that the proposed PGRNet outperforms previous FSS methods by a considerable margin and establishes a new state-of-the-art performance.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"761-773"},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142231334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prior-Knowledge Embedded U-Net-Based Fully Automatic Vessel Wall Volume Measurement of the Carotid Artery in 3D Ultrasound Image
Pub Date : 2024-09-10 DOI: 10.1109/TMI.2024.3457245
Zheng Yue;Jiayao Jiang;Wenguang Hou;Quan Zhou;J. David Spence;Aaron Fenster;Wu Qiu;Mingyue Ding
The vessel wall volume (VWV) measured from three-dimensional (3D) carotid artery (CA) ultrasound (US) images can help assess carotid atherosclerosis and manage patients at risk for stroke. Manual measurement is subjective and requires well-trained operators, and fully automatic measurement tools are not yet available. We therefore propose a fully automatic VWV measurement framework (Auto-VWV) that uses a CA prior-knowledge embedded U-Net (CAP-UNet) to measure the VWV from 3D CA US images without manual intervention. The Auto-VWV framework is designed to improve the consistency of repeated VWV measurements, resulting in the first fully automatic framework for VWV measurement. CAP-UNet is developed to improve segmentation accuracy on the whole CA and is composed of a U-Net-type backbone and three additional prior-knowledge learning modules. Specifically, a continuity learning module learns the spatial continuity of the arteries across a sequence of image slices, a voxel evolution learning module learns the evolution of the artery in adjacent slices, and a topology learning module learns the unique topology of the carotid artery. On two 3D CA US datasets, the CAP-UNet architecture achieved state-of-the-art performance compared to eight competing models. Furthermore, CAP-UNet-based Auto-VWV achieved better accuracy and consistency than Auto-VWV based on the competing models in simulated repeated measurements. Finally, on 10 pairs of real repeatedly scanned samples, Auto-VWV achieved better VWV measurement reproducibility than intra- and inter-operator manual measurements. The code is available at https://github.com/Yue9603/Auto-VWV.
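
Once lumen and outer-wall masks are segmented, the VWV itself reduces to counting wall voxels; a minimal NumPy sketch follows (voxel spacing and mask semantics are assumptions, and this is not the paper's pipeline code).

import numpy as np

def vessel_wall_volume(outer_mask, lumen_mask, spacing_mm=(0.1, 0.1, 0.1)):
    """outer_mask: 3D boolean array of the region inside the outer wall
    (lumen included); lumen_mask: 3D boolean array of the lumen alone.

    Returns the vessel wall volume in mm^3.
    """
    wall = np.logical_and(outer_mask, np.logical_not(lumen_mask))
    voxel_mm3 = float(np.prod(spacing_mm))
    return wall.sum() * voxel_mm3
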
{"title":"Prior-Knowledge Embedded U-Net-Based Fully Automatic Vessel Wall Volume Measurement of the Carotid Artery in 3D Ultrasound Image","authors":"Zheng Yue;Jiayao Jiang;Wenguang Hou;Quan Zhou;J. David Spence;Aaron Fenster;Wu Qiu;Mingyue Ding","doi":"10.1109/TMI.2024.3457245","DOIUrl":"10.1109/TMI.2024.3457245","url":null,"abstract":"The vessel-wall-volume (VWV) measured based on three-dimensional (3D) carotid artery (CA) ultrasound (US) images can help to assess carotid atherosclerosis and manage patients at risk for stroke. Manual involvement for measurement work is subjective and requires well-trained operators, and fully automatic measurement tools are not yet available. Thereby, we proposed a fully automatic VWV measurement framework (Auto-VWV) using a CA prior-knowledge embedded U-Net (CAP-UNet) to measure the VWV from 3D CA US images without manual intervention. The Auto-VWV framework is designed to improve the repeated VWV measuring consistency, which resulted in the first fully automatic framework for VWV measurement. CAP-UNet is developed to improve segmentation accuracy on the whole CA, which composed of a U-Net type backbone and three additional prior-knowledge learning modules. Specifically, a continuity learning module is used to learn the spatial continuity of the arteries in a sequence of image slices. A voxel evolution learning module was designed to learn the evolution of the artery in adjacent slices, and a topology learning module was used to learn the unique topology of the carotid artery. In two 3D CA US datasets, CAP-UNet architecture achieved state-of-the-art performance compared to eight competing models. Furthermore, CAP-UNet-based Auto-VWV achieved better accuracy and consistency than Auto-VWV based on competing models in the simulated repeated measurement. Finally, using 10 pairs of real repeatedly scanned samples, Auto-VWV achieved better VWV measurement reproducibility than intra- and inter-operator manual measurements. The code is available at <uri>https://github.com/Yue9603/Auto-VWV</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"711-727"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142166142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Point Cloud Registration in Laparoscopic Liver Surgery Using Keypoint Correspondence Registration Network
Pub Date : 2024-09-10 DOI: 10.1109/TMI.2024.3457228
Yirui Zhang;Yanni Zou;Peter X. Liu
Laparoscopic liver surgery is a recently developed minimally invasive technique and represents an inevitable trend in the development of surgical methods. By using augmented reality (AR) technology to overlay preoperative CT models on intraoperative laparoscopic videos, surgeons can accurately locate blood vessels and tumors, significantly enhancing the safety and precision of surgery. Point cloud registration technology is key to achieving this effect. However, registering the CT model with the point cloud surface reconstructed from intraoperative laparoscopy poses two major challenges. First, the surface features of the organ are not prominent. Second, due to the limited field of view of the laparoscope, the reconstructed surface typically covers only a very small portion of the entire organ. To address these issues, this paper proposes the keypoint correspondence registration network (KCR-Net). The network first uses the neighborhood feature fusion module (NFFM) to aggregate and interact features from different regions and structures within a pair of point clouds to obtain comprehensive feature representations. Then, through correspondence generation, it directly generates keypoints and their corresponding weights, with keypoints located in the common structures of the point clouds to be registered and weights learned automatically by the network. This approach enables accurate point cloud registration even under extremely low overlap. Experiments conducted on the ModelNet40, 3Dircadb, and DePoLL datasets demonstrate that our method achieves excellent registration accuracy and meets the requirements of real-world scenarios.
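
Given keypoint correspondences and weights, a rigid alignment can be recovered in closed form; below is a sketch of the classical weighted Kabsch solution that such outputs can feed into (illustrative only; KCR-Net's learned modules for producing keypoints and weights are not shown).

import numpy as np

def weighted_rigid_transform(src, dst, w):
    """src, dst: (N, 3) corresponding keypoints; w: (N,) positive weights.

    Returns R (3x3) and t (3,) minimising sum_i w_i * ||R @ src_i + t - dst_i||^2,
    assuming a non-degenerate point configuration.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
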
{"title":"Point Cloud Registration in Laparoscopic Liver Surgery Using Keypoint Correspondence Registration Network","authors":"Yirui Zhang;Yanni Zou;Peter X. Liu","doi":"10.1109/TMI.2024.3457228","DOIUrl":"10.1109/TMI.2024.3457228","url":null,"abstract":"Laparoscopic liver surgery is a newly developed minimally invasive technique and represents an inevitable trend in the future development of surgical methods. By using augmented reality (AR) technology to overlay preoperative CT models with intraoperative laparoscopic videos, surgeons can accurately locate blood vessels and tumors, significantly enhancing the safety and precision of surgeries. Point cloud registration technology is key to achieving this effect. However, there are two major challenges in registering the CT model with the point cloud surface reconstructed from intraoperative laparoscopy. First, the surface features of the organ are not prominent. Second, due to the limited field of view of the laparoscope, the reconstructed surface typically represents only a very small portion of the entire organ. To address these issues, this paper proposes the keypoint correspondence registration network (KCR-Net). This network first uses the neighborhood feature fusion module (NFFM) to aggregate and interact features from different regions and structures within a pair of point clouds to obtain comprehensive feature representations. Then, through correspondence generation, it directly generates keypoints and their corresponding weights, with keypoints located in the common structures of the point clouds to be registered, and corresponding weights learned automatically by the network. This approach enables accurate point cloud registration even under conditions of extremely low overlap. Experiments conducted on the ModelNet40, 3Dircadb, DePoLL demonstrate that our method achieves excellent registration accuracy and is capable of meeting the requirements of real-world scenarios.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"749-760"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142166141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Tracking Prior to Localization Workflow for Ultrasound Localization Microscopy
Pub Date : 2024-09-09 DOI: 10.1109/TMI.2024.3456676
Alexis Leconte;Jonathan Porée;Brice Rauby;Alice Wu;Nin Ghigo;Paul Xing;Stephen Lee;Chloé Bourquin;Gerardo Ramos-Palacios;Abbas F. Sadikot;Jean Provost
Ultrasound Localization Microscopy (ULM) has proven effective in resolving microvascular structures and local mean velocities at sub-diffraction-limited scales, offering high-resolution imaging capabilities. Dynamic ULM (DULM) enables the creation of angiography or velocity movies throughout cardiac cycles. Currently, these techniques rely on a Localization-and-Tracking (LAT) workflow that consists of detecting microbubbles (MBs) in each frame before pairing them to generate tracks. While conventional LAT methods perform well at low concentrations, they suffer from longer acquisition times and degraded localization and tracking accuracy at higher concentrations, leading to biased angiogram reconstruction and velocity estimation. In this study, we propose a novel approach that addresses these challenges by reversing the current workflow. The proposed method, Tracking-and-Localization (TAL), first tracks the MBs and then performs localization. Through comprehensive benchmarking on both in silico and in vivo experiments, using various metrics to quantify ULM angiography and velocity maps, we demonstrate that TAL consistently outperforms the reference LAT workflow. Moreover, when applied to DULM, TAL successfully extracts velocity variations along the cardiac cycle with improved repeatability. These findings highlight the effectiveness of the TAL approach in overcoming the limitations of conventional LAT methods, providing enhanced ULM angiography and velocity imaging.
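
For context, a toy greedy nearest-neighbour pairing of detections between consecutive frames, of the kind the conventional LAT pairing step relies on (purely illustrative; the proposed TAL method reverses this order, tracking before localization, and is considerably more involved).

import numpy as np

def link_frames(prev_pts, next_pts, max_dist=2.0):
    """Greedy one-to-one pairing of (N, 2) and (M, 2) detection coordinates.

    Returns a list of (i, j) index pairs whose displacement is at most
    max_dist, in the same units as the coordinates (e.g., pixels per frame).
    """
    if len(prev_pts) == 0 or len(next_pts) == 0:
        return []
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    pairs, used_i, used_j = [], set(), set()
    # Visit candidate pairs in order of increasing distance.
    for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
        if i in used_i or j in used_j or d[i, j] > max_dist:
            continue
        pairs.append((int(i), int(j)))
        used_i.add(i)
        used_j.add(j)
    return pairs
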
{"title":"A Tracking Prior to Localization Workflow for Ultrasound Localization Microscopy","authors":"Alexis Leconte;Jonathan Porée;Brice Rauby;Alice Wu;Nin Ghigo;Paul Xing;Stephen Lee;Chloé Bourquin;Gerardo Ramos-Palacios;Abbas F. Sadikot;Jean Provost","doi":"10.1109/TMI.2024.3456676","DOIUrl":"10.1109/TMI.2024.3456676","url":null,"abstract":"Ultrasound Localization Microscopy (ULM) has proven effective in resolving microvascular structures and local mean velocities at sub-diffraction-limited scales, offering high-resolution imaging capabilities. Dynamic ULM (DULM) enables the creation of angiography or velocity movies throughout cardiac cycles. Currently, these techniques rely on a Localization-and-Tracking (LAT) workflow consisting in detecting microbubbles (MB) in the frames before pairing them to generate tracks. While conventional LAT methods perform well at low concentrations, they suffer from longer acquisition times and degraded localization and tracking accuracy at higher concentrations, leading to biased angiogram reconstruction and velocity estimation. In this study, we propose a novel approach to address these challenges by reversing the current workflow. The proposed method, Tracking-and-Localization (TAL), relies on first tracking the MB and then performing localization. Through comprehensive benchmarking using both in silico and in vivo experiments and employing various metrics to quantify ULM angiography and velocity maps, we demonstrate that the TAL method consistently outperforms the reference LAT workflow. Moreover, when applied to DULM, TAL successfully extracts velocity variations along the cardiac cycle with improved repeatability. The findings of this work highlight the effectiveness of the TAL approach in overcoming the limitations of conventional LAT methods, providing enhanced ULM angiography and velocity imaging.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"698-710"},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142160541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Semantically-Consistent Deformable 2D-3D Registration for 3D Craniofacial Structure Estimation From a Single-View Lateral Cephalometric Radiograph
Pub Date : 2024-09-09 DOI: 10.1109/TMI.2024.3456251
Yikun Jiang;Yuru Pei;Tianmin Xu;Xiaoru Yuan;Hongbin Zha
Deep neural networks combined with statistical shape models have enabled efficient deformable 2D-3D registration and recovery of 3D anatomical structures from a single radiograph. However, the recovered volumetric image tends to lack the volumetric fidelity of fine-grained anatomical structures and explicit consideration of cross-dimensional semantic correspondence. In this paper, we introduce a simple but effective solution for semantically-consistent deformable 2D-3D registration and detailed volumetric image recovery by inferring a voxel-wise registration field between a cone-beam computed tomography volume and a single lateral cephalometric radiograph (LC). The key idea is to refine the initial statistical-model-based registration field with craniofacial structural details and semantic consistency derived from the LC. Specifically, our framework employs a self-supervised scheme to learn a voxel-level refiner of registration fields that provides fine-grained craniofacial structural details and volumetric fidelity. We also present a weakly supervised semantic consistency measure for semantic correspondence, relieving the requirement for volumetric image collections and annotations. Experiments show that our method achieves deformable 2D-3D registration with performance gains over state-of-the-art registration and radiograph-based volumetric reconstruction methods. The source code is available at https://github.com/Jyk-122/SC-DREG.
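
As a generic building block for the consistency idea, a hedged sketch of a soft Dice agreement term between two semantic probability maps follows; the paper's weakly supervised cross-dimensional measure is more specific than this.

import torch

def soft_dice_consistency(p, q, eps=1e-6):
    """p, q: semantic probability maps of matching shape with values in [0, 1].

    Returns 1 - Dice(p, q), so lower values mean better semantic agreement.
    """
    inter = (p * q).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + q.sum() + eps)
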
{"title":"Toward Semantically-Consistent Deformable 2D-3D Registration for 3D Craniofacial Structure Estimation From a Single-View Lateral Cephalometric Radiograph","authors":"Yikun Jiang;Yuru Pei;Tianmin Xu;Xiaoru Yuan;Hongbin Zha","doi":"10.1109/TMI.2024.3456251","DOIUrl":"10.1109/TMI.2024.3456251","url":null,"abstract":"The deep neural networks combined with the statistical shape model have enabled efficient deformable 2D-3D registration and recovery of 3D anatomical structures from a single radiograph. However, the recovered volumetric image tends to lack the volumetric fidelity of fine-grained anatomical structures and explicit consideration of cross-dimensional semantic correspondence. In this paper, we introduce a simple but effective solution for semantically-consistent deformable 2D-3D registration and detailed volumetric image recovery by inferring a voxel-wise registration field between the cone-beam computed tomography and a single lateral cephalometric radiograph (LC). The key idea is to refine the initial statistical model-based registration field with craniofacial structural details and semantic consistency from the LC. Specifically, our framework employs a self-supervised scheme to learn a voxel-level refiner of registration fields to provide fine-grained craniofacial structural details and volumetric fidelity. We also present a weakly supervised semantic consistency measure for semantic correspondence, relieving the requirements of volumetric image collections and annotations. Experiments showcase that our method achieves deformable 2D-3D registration with performance gains over state-of-the-art registration and radiograph-based volumetric reconstruction methods. The source code is available at <uri>https://github.com/Jyk-122/SC-DREG</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"685-697"},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142160539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Full-Wave Image Reconstruction in Transcranial Photoacoustic Computed Tomography Using a Finite Element Method
Pub Date : 2024-09-09 DOI: 10.1109/TMI.2024.3456595
Yilin Luo;Hsuan-Kai Huang;Karteekeya Sastry;Peng Hu;Xin Tong;Joseph Kuo;Yousuf Aborahama;Shuai Na;Umberto Villa;Mark A. Anastasio;Lihong V. Wang
Transcranial photoacoustic computed tomography presents challenges in human brain imaging due to skull-induced acoustic aberration. Existing full-wave image reconstruction methods rely on a unified elastic wave equation for skull shear and longitudinal wave propagation, therefore demanding substantial computational resources. We propose an efficient discrete imaging model based on finite element discretization. The elastic wave equation for solids is solely applied to the hard-tissue skull region, while the soft-tissue or coupling-medium region that dominates the simulation domain is modeled with the simpler acoustic wave equation for liquids. The solid-liquid interfaces are explicitly modeled with elastic-acoustic coupling. Furthermore, finite element discretization allows coarser, irregular meshes to conform to object geometry. These factors significantly reduce the linear system size by 20 times to facilitate accurate whole-brain simulations with improved speed. We derive a matched forward-adjoint operator pair based on the model to enable integration with various optimization algorithms. We validate the reconstruction framework through numerical simulations and phantom experiments.
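
As a conceptual stand-in for the liquid-domain solver, a toy 1D acoustic wave stepper with spatially varying sound speed is sketched below; the paper uses a 3D finite element discretisation with explicit elastic-acoustic coupling, which this finite-difference fragment does not attempt to reproduce.

import numpy as np

def step_acoustic_1d(p_prev, p_curr, c, dt, dx):
    """One leapfrog update of p_tt = c^2 * p_xx on a 1D grid.

    p_prev, p_curr: pressure at the two previous time steps (1D arrays)
    c: sound speed, scalar or per-grid-point array; stability requires
       max(c) * dt / dx <= 1 (CFL condition).
    """
    lap = np.zeros_like(p_curr)
    lap[1:-1] = (p_curr[2:] - 2 * p_curr[1:-1] + p_curr[:-2]) / dx**2
    p_next = 2 * p_curr - p_prev + (c * dt) ** 2 * lap
    p_next[0] = p_next[-1] = 0.0   # simple pressure-release (Dirichlet) ends
    return p_next
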
{"title":"Full-Wave Image Reconstruction in Transcranial Photoacoustic Computed Tomography Using a Finite Element Method","authors":"Yilin Luo;Hsuan-Kai Huang;Karteekeya Sastry;Peng Hu;Xin Tong;Joseph Kuo;Yousuf Aborahama;Shuai Na;Umberto Villa;Mark. A. Anastasio;Lihong V. Wang","doi":"10.1109/TMI.2024.3456595","DOIUrl":"10.1109/TMI.2024.3456595","url":null,"abstract":"Transcranial photoacoustic computed tomography presents challenges in human brain imaging due to skull-induced acoustic aberration. Existing full-wave image reconstruction methods rely on a unified elastic wave equa- tion for skull shear and longitudinal wave propagation, therefore demanding substantial computational resources. We propose an efficient discrete imaging model based on finite element discretization. The elastic wave equation for solids is solely applied to the hard-tissue skull region, while the soft-tissue or coupling-medium region that dominates the simulation domain is modeled with the simpler acoustic wave equation for liquids. The solid-liquid interfaces are explicitly modeled with elastic-acoustic coupling. Furthermore, finite element discretization allows coarser, irregular meshes to conform to object geometry. These factors significantly reduce the linear system size by 20 times to facilitate accurate whole-brain simulations with improved speed. We derive a matched forward-adjoint operator pair based on the model to enable integration with various optimization algorithms. We validate the reconstruction framework through numerical simulations and phantom experiments.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"645-655"},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142160540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Self-Navigated 3D Diffusion MRI Using an Optimized CAIPI Sampling and Structured Low-Rank Reconstruction Estimated Navigator
Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3454994
Ziyu Li;Karla L. Miller;Xi Chen;Mark Chiew;Wenchuan Wu
3D multi-slab acquisitions are an appealing approach for diffusion MRI because they are compatible with the imaging regime delivering optimal SNR efficiency. In conventional 3D multi-slab imaging, shot-to-shot phase variations caused by motion pose challenges due to the use of multi-shot k-space acquisition. Navigator acquisition after each imaging echo is typically employed to correct phase variations, which prolongs scan time and increases the specific absorption rate (SAR). The aim of this study is to develop a highly efficient, self-navigated method to correct for phase variations in 3D multi-slab diffusion MRI without explicitly acquiring navigators. The sampling of each shot is carefully designed to intersect with the central kz=0 plane of each slab, and the multi-shot sampling is optimized for self-navigation performance while retaining decent reconstruction quality. The kz=0 intersections from all shots are jointly used to reconstruct a 2D phase map for each shot using a structured low-rank constrained reconstruction that leverages the redundancy in shot and coil dimensions. The phase maps are used to eliminate the shot-to-shot phase inconsistency in the final 3D multi-shot reconstruction. We demonstrate the method’s efficacy using retrospective simulations and prospectively acquired in-vivo experiments at 1.22 mm and 1.09 mm isotropic resolutions. Compared to conventional navigated 3D multi-slab imaging, the proposed self-navigated method achieves comparable image quality while shortening the scan time by 31.7% and improving the SNR efficiency by 15.5%. The proposed method produces comparable quality of DTI and white matter tractography to conventional navigated 3D multi-slab acquisition with a much shorter scan time.
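
To illustrate the low-rank ingredient in isolation, here is a bare Casorati-matrix truncation: shot and coil data are stacked into a matrix and projected onto its leading singular components. The paper's structured low-rank reconstruction solves a constrained inverse problem rather than this plain truncation; shapes and the rank are assumptions.

import numpy as np

def low_rank_project(kdata, rank):
    """kdata: complex array of shape (n_shots * n_coils, n_samples).

    Returns the best rank-`rank` approximation in the least-squares sense.
    """
    U, s, Vh = np.linalg.svd(kdata, full_matrices=False)
    s[rank:] = 0.0                 # keep only the leading singular components
    return (U * s) @ Vh
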
{"title":"Self-Navigated 3D Diffusion MRI Using an Optimized CAIPI Sampling and Structured Low-Rank Reconstruction Estimated Navigator","authors":"Ziyu Li;Karla L. Miller;Xi Chen;Mark Chiew;Wenchuan Wu","doi":"10.1109/TMI.2024.3454994","DOIUrl":"10.1109/TMI.2024.3454994","url":null,"abstract":"3D multi-slab acquisitions are an appealing approach for diffusion MRI because they are compatible with the imaging regime delivering optimal SNR efficiency. In conventional 3D multi-slab imaging, shot-to-shot phase variations caused by motion pose challenges due to the use of multi-shot k-space acquisition. Navigator acquisition after each imaging echo is typically employed to correct phase variations, which prolongs scan time and increases the specific absorption rate (SAR). The aim of this study is to develop a highly efficient, self-navigated method to correct for phase variations in 3D multi-slab diffusion MRI without explicitly acquiring navigators. The sampling of each shot is carefully designed to intersect with the central kz=0 plane of each slab, and the multi-shot sampling is optimized for self-navigation performance while retaining decent reconstruction quality. The kz=0 intersections from all shots are jointly used to reconstruct a 2D phase map for each shot using a structured low-rank constrained reconstruction that leverages the redundancy in shot and coil dimensions. The phase maps are used to eliminate the shot-to-shot phase inconsistency in the final 3D multi-shot reconstruction. We demonstrate the method’s efficacy using retrospective simulations and prospectively acquired in-vivo experiments at 1.22 mm and 1.09 mm isotropic resolutions. Compared to conventional navigated 3D multi-slab imaging, the proposed self-navigated method achieves comparable image quality while shortening the scan time by 31.7% and improving the SNR efficiency by 15.5%. The proposed method produces comparable quality of DTI and white matter tractography to conventional navigated 3D multi-slab acquisition with a much shorter scan time.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"632-644"},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0