
Latest publications in IEEE Transactions on Medical Imaging

WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with Sub-Class Exploration and Prompt Affinity Mining.
Pub Date : 2025-03-10 DOI: 10.1109/TMI.2025.3549433
Haoran Wang, Lian Huai, Wenbin Li, Lei Qi, Xingqun Jiang, Yinghuan Shi

We have witnessed remarkable progress in foundation models for vision tasks. Recently, several works have utilized the Segment Anything Model (SAM) to boost segmentation performance on medical images, most of them focusing on training an adaptor to fine-tune SAM on a large amount of pixel-wise annotated medical images in a fully supervised manner. In this paper, to reduce the labeling cost, we investigate a novel weakly-supervised SAM-based segmentation model, namely WeakMedSAM. Specifically, the proposed WeakMedSAM contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations; 2) to improve the quality of the class activation maps, a prompt affinity mining module utilizes the prompt capability of SAM to obtain an affinity map for random-walk refinement. Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. Experimental results on three widely used benchmark datasets, i.e., BraTS 2019, AbdomenCT-1K, and the MSD Cardiac dataset, demonstrate the promise of the proposed WeakMedSAM. Our code is available at https://github.com/wanghr64/WeakMedSAM.
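To make the random-walk refinement step concrete, here is a minimal NumPy sketch that diffuses a coarse class activation map over a pixel-affinity graph. The affinity matrix is assumed to be already mined (e.g., from SAM prompt responses), and all names and toy inputs are illustrative rather than taken from the released code.

```python
import numpy as np

def random_walk_refine(cam, affinity, beta=8, t=4):
    """Diffuse a coarse class activation map over an affinity graph.

    cam:      (H, W) coarse class activation map.
    affinity: (H*W, H*W) non-negative pixel-pair affinities, assumed to be
              mined beforehand (e.g., from SAM prompt responses).
    beta:     exponent that sharpens the affinities.
    t:        number of random-walk (diffusion) steps.
    """
    h, w = cam.shape
    trans = affinity ** beta
    trans = trans / (trans.sum(axis=1, keepdims=True) + 1e-8)  # row-stochastic
    vec = cam.reshape(-1)
    for _ in range(t):
        vec = trans @ vec  # one diffusion step along the graph
    return vec.reshape(h, w)

# Toy usage: refine a random 4x4 map with a near-identity affinity matrix.
cam = np.random.rand(4, 4)
affinity = np.eye(16) + 0.1 * np.random.rand(16, 16)
refined = random_walk_refine(cam, affinity)
```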

{"title":"WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with Sub-Class Exploration and Prompt Affinity Mining.","authors":"Haoran Wang, Lian Huai, Wenbin Li, Lei Qi, Xingqun Jiang, Yinghuan Shi","doi":"10.1109/TMI.2025.3549433","DOIUrl":"https://doi.org/10.1109/TMI.2025.3549433","url":null,"abstract":"<p><p>We have witnessed remarkable progress in foundation models in vision tasks. Currently, several recent works have utilized the segmenting anything model (SAM) to boost the segmentation performance in medical images, where most of them focus on training an adaptor for fine-tuning a large amount of pixel-wise annotated medical images following a fully supervised manner. In this paper, to reduce the labeling cost, we investigate a novel weakly-supervised SAM-based segmentation model, namely WeakMedSAM. Specifically, our proposed WeakMedSAM contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations. 2) to improve the quality of the class activation maps, our prompt affinity mining module utilizes the prompt capability of SAM to obtain an affinity map for random-walk refinement. Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. The experimental results on three popularlyused benchmark datasets, i.e., BraTS 2019, AbdomenCT-1K, and MSD Cardiac dataset, show the promising results of our proposed WeakMedSAM. Our code is available at https://github.com/wanghr64/WeakMedSAM.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Building a Synthetic Vascular Model: Evaluation in an Intracranial Aneurysms Detection Scenario.
Pub Date : 2024-11-06 DOI: 10.1109/TMI.2024.3492313
Rafic Nader, Florent Autrusseau, Vincent L'Allinec, Romain Bourcier

We hereby present a fully synthetic model able to mimic the various constituents of the cerebral vascular tree, including the cerebral arteries, bifurcations, and intracranial aneurysms. This model is intended to provide a substantial dataset of brain arteries that a 3D convolutional neural network can use to efficiently detect intracranial aneurysms. Cerebral aneurysms most often occur on a particular structure of the vascular tree named the Circle of Willis. Various studies have been conducted to detect and monitor aneurysms, and those based on deep learning achieve the best performance. Specifically, in this work, we propose a fully synthetic 3D model able to mimic the brain vasculature as acquired by Time-of-Flight Magnetic Resonance Angiography (TOF-MRA). Among the various MRI modalities, TOF-MRA renders the blood vessels well and is non-invasive. Our model is designed to simultaneously mimic the arteries' geometry, the aneurysm shape, and the background noise. The vascular tree geometry is modeled through interpolation with 3D spline functions, and the statistical properties of the background noise are collected from angiography acquisitions and reproduced within the model. In this work, we thoroughly describe the synthetic vasculature model, build a neural network designed for aneurysm segmentation and detection, and finally carry out an in-depth evaluation of the performance gain obtained through data augmentation with the synthetic model.
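The spline-based centerline modeling described above can be sketched in a few lines of SciPy; the control points and sampling density below are assumed values for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical control points (mm) of one synthetic artery centerline.
points = np.array([[0, 0, 0], [5, 2, 1], [10, 3, 4], [15, 2, 8], [20, 0, 10]],
                  dtype=float)

# Fit a cubic 3D spline through the control points (s=0 -> pure interpolation).
tck, _ = splprep(points.T, s=0, k=3)

# Densely resample the smooth centerline; a radius profile could then be
# swept along it to rasterize a vessel lumen into a TOF-MRA-like volume.
u = np.linspace(0, 1, 200)
x, y, z = splev(u, tck)
centerline = np.stack([x, y, z], axis=1)  # (200, 3)
```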

{"title":"Building a Synthetic Vascular Model: Evaluation in an Intracranial Aneurysms Detection Scenario.","authors":"Rafic Nader, Florent Autrusseau, Vincent L'Allinec, Romain Bourcier","doi":"10.1109/TMI.2024.3492313","DOIUrl":"https://doi.org/10.1109/TMI.2024.3492313","url":null,"abstract":"<p><p>We hereby present a full synthetic model, able to mimic the various constituents of the cerebral vascular tree, including the cerebral arteries, bifurcations and intracranial aneurysms. This model intends to provide a substantial dataset of brain arteries which could be used by a 3D convolutional neural network to efficiently detect Intra-Cranial Aneurysms. The cerebral aneurysms most often occur on a particular structure of the vascular tree named the Circle of Willis. Various studies have been conducted to detect and monitor the aneurysms and those based on Deep Learning achieve the best performance. Specifically, in this work, we propose a full synthetic 3D model able to mimic the brain vasculature as acquired by Magnetic Resonance Angiography, Time Of Flight principle. Among the various MRI modalities, this latter allows for a good rendering of the blood vessels and is non-invasive. Our model has been designed to simultaneously mimic the arteries' geometry, the aneurysm shape, and the background noise. The vascular tree geometry is modeled thanks to an interpolation with 3D Spline functions, and the statistical properties of the background noise is collected from angiography acquisitions and reproduced within the model. In this work, we thoroughly describe the synthetic vasculature model, we build up a neural network designed for aneurysm segmentation and detection, finally, we carry out an in-depth evaluation of the performance gap gained thanks to the synthetic model data augmentation.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FAMF-Net: Feature Alignment Mutual Attention Fusion with Region Awareness for Breast Cancer Diagnosis via Imbalanced Data.
Pub Date : 2024-11-05 DOI: 10.1109/TMI.2024.3485612
Yiyao Liu, Jinyao Li, Cheng Zhao, Yongtao Zhang, Qian Chen, Jing Qin, Lei Dong, Tianfu Wang, Wei Jiang, Baiying Lei

Automatic and accurate classification of breast cancer in multimodal ultrasound images is crucial to improving patients' diagnosis and treatment and to saving medical resources. Methodologically, the fusion of multimodal ultrasound images often encounters challenges such as misalignment, limited utilization of complementary information, poor interpretability in feature fusion, and imbalanced sample categories. To solve these problems, we propose a feature alignment mutual attention fusion method (FAMF-Net), which consists of a region awareness alignment (RAA) block, a mutual attention fusion (MAF) block, and a reinforcement learning-based dynamic optimization strategy (RDO). Specifically, RAA achieves region awareness through class activation mapping and performs translation transformations to achieve feature alignment. When MAF utilizes a mutual attention mechanism for feature interaction fusion, it mines edge and color features separately in B-mode and shear wave elastography images, enhancing the complementarity of features and improving interpretability. Finally, RDO uses the distribution of samples and prediction probabilities during training as the state of reinforcement learning to dynamically optimize the weights of the loss function, thereby solving the problem of class imbalance. Experimental results on our clinically obtained dataset demonstrate the effectiveness of the proposed method. Our code will be available at: https://github.com/Magnety/Multi_modal_Image.
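The following PyTorch sketch shows one simple way to realize dynamic loss weighting for class imbalance: a running class-frequency estimate is turned into inverse-frequency cross-entropy weights. This is a heuristic stand-in for the paper's reinforcement learning-based RDO strategy; all names are invented for illustration.

```python
import torch
import torch.nn.functional as F

class DynamicClassWeights:
    """Running class-frequency tracker yielding inverse-frequency loss weights."""

    def __init__(self, num_classes, momentum=0.9):
        self.num_classes = num_classes
        self.momentum = momentum
        self.freq = torch.full((num_classes,), 1.0 / num_classes)

    def update(self, targets):
        batch = torch.bincount(targets, minlength=self.num_classes).float()
        batch = batch / batch.sum().clamp(min=1.0)
        self.freq = self.momentum * self.freq + (1 - self.momentum) * batch

    def weights(self):
        w = 1.0 / (self.freq + 1e-3)           # rarer classes -> larger weights
        return w / w.sum() * self.num_classes  # normalize to mean weight 1

weighter = DynamicClassWeights(num_classes=2)
logits = torch.randn(8, 2)                        # e.g. benign vs. malignant
targets = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])  # imbalanced batch
weighter.update(targets)
loss = F.cross_entropy(logits, targets, weight=weighter.weights())
```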

{"title":"FAMF-Net: Feature Alignment Mutual Attention Fusion with Region Awareness for Breast Cancer Diagnosis via Imbalanced Data.","authors":"Yiyao Liu, Jinyao Li, Cheng Zhao, Yongtao Zhang, Qian Chen, Jing Qin, Lei Dong, Tianfu Wang, Wei Jiang, Baiying Lei","doi":"10.1109/TMI.2024.3485612","DOIUrl":"https://doi.org/10.1109/TMI.2024.3485612","url":null,"abstract":"<p><p>Automatic and accurate classification of breast cancer in multimodal ultrasound images is crucial to improve patients' diagnosis and treatment effect and save medical resources. Methodologically, the fusion of multimodal ultrasound images often encounters challenges such as misalignment, limited utilization of complementary information, poor interpretability in feature fusion, and imbalances in sample categories. To solve these problems, we propose a feature alignment mutual attention fusion method (FAMF-Net), which consists of a region awareness alignment (RAA) block, a mutual attention fusion (MAF) block, and a reinforcement learning-based dynamic optimization strategy(RDO). Specifically, RAA achieves region awareness through class activation mapping and performs translation transformation to achieve feature alignment. When MAF utilizes a mutual attention mechanism for feature interaction fusion, it mines edge and color features separately in B-mode and shear wave elastography images, enhancing the complementarity of features and improving interpretability. Finally, RDO uses the distribution of samples and prediction probabilities during training as the state of reinforcement learning to dynamically optimize the weights of the loss function, thereby solving the problem of class imbalance. The experimental results based on our clinically obtained dataset demonstrate the effectiveness of the proposed method. Our code will be available at: https://github.com/Magnety/Multi_modal_Image.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Corrections to “Contrastive Graph Pooling for Explainable Classification of Brain Networks”
Pub Date : 2024-11-04 DOI: 10.1109/TMI.2024.3465968
Jiaxing Xu;Qingtian Bian;Xinhang Li;Aihu Zhang;Yiping Ke;Miao Qiao;Wei Zhang;Wei Khang Jeremy Sim;Balázs Gulyás
{"title":"Corrections to “Contrastive Graph Pooling for Explainable Classification of Brain Networks”","authors":"Jiaxing Xu;Qingtian Bian;Xinhang Li;Aihu Zhang;Yiping Ke;Miao Qiao;Wei Zhang;Wei Khang Jeremy Sim;Balázs Gulyás","doi":"10.1109/TMI.2024.3465968","DOIUrl":"10.1109/TMI.2024.3465968","url":null,"abstract":"","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"43 11","pages":"4075-4075"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741900","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results.
Pub Date : 2024-10-30 DOI: 10.1109/TMI.2024.3485554
Kelly Payette, Celine Steger, Roxane Licandro, Priscille De Dumast, Hongwei Bran Li, Matthew Barkovich, Liu Li, Maik Dannecker, Chen Chen, Cheng Ouyang, Niccolo McConnell, Alina Miron, Yongmin Li, Alena Uus, Irina Grigorescu, Paula Ramirez Gilliland, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Haoyu Wang, Ziyan Huang, Jin Ye, Mireia Alenya, Valentin Comte, Oscar Camara, Jean-Baptiste Masson, Astrid Nilsson, Charlotte Godard, Moona Mazher, Abdul Qayyum, Yibo Gao, Hangqi Zhou, Shangqi Gao, Jia Fu, Guiming Dong, Guotai Wang, ZunHyan Rieu, HyeonSik Yang, Minwoo Lee, Szymon Plotka, Michal K Grzeszczyk, Arkadiusz Sitek, Luisa Vargas Daza, Santiago Usma, Pablo Arbelaez, Wenying Lu, Wenhao Zhang, Jing Liang, Romain Valabregue, Anand A Joshi, Krishna N Nayak, Richard M Leahy, Luca Wilhelmi, Aline Dandliker, Hui Ji, Antonio G Gennari, Anton Jakovcic, Melita Klaic, Ana Adzic, Pavel Markovic, Gracia Grabaric, Gregor Kasprian, Gregor Dovjak, Milan Rados, Lana Vasung, Meritxell Bach Cuadra, Andras Jakab

Segmentation is a critical step in analyzing the developing human fetal brain. Automatic segmentation methods have improved vastly in the past several years, and the Fetal Brain Tissue Annotation (FeTA) Challenge 2021 helped establish an excellent standard for fetal brain segmentation. However, FeTA 2021 was a single-center study, limiting real-world clinical applicability and acceptance. The multi-center FeTA Challenge 2022 focused on advancing the generalizability of fetal brain segmentation algorithms for magnetic resonance imaging (MRI). In FeTA 2022, the training dataset contained images and corresponding manually annotated multi-class labels from two imaging centers, and the testing data contained images from these two centers as well as from two additional unseen centers. The multi-center data covered different MR scanners, imaging parameters, and fetal brain super-resolution algorithms. Sixteen teams participated, and 17 algorithms were evaluated. Here, the challenge results are presented, focusing on the generalizability of the submissions. Both in- and out-of-domain, the white matter and ventricles were segmented with the highest accuracy (top Dice scores: 0.89 and 0.87, respectively), while the most challenging structure remains the grey matter (top Dice score: 0.75) due to its anatomical complexity. The top five average Dice scores ranged from 0.81 to 0.82, the top five average 95th percentile Hausdorff distances ranged from 2.3 to 2.5 mm, and the top five volumetric similarity scores ranged from 0.90 to 0.92. The FeTA Challenge 2022 successfully evaluated and advanced the generalizability of multi-class fetal brain tissue segmentation algorithms for MRI, and it continues to benchmark new algorithms.
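The two overlap-based metrics quoted above are straightforward to reproduce; the NumPy sketch below computes per-class Dice and volumetric similarity on stand-in label volumes, assuming seven tissue classes labeled 1 to 7 (an assumption made here for illustration, not stated in the abstract).

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def volumetric_similarity(pred, gt):
    """1 - |V_pred - V_gt| / (V_pred + V_gt) for two binary masks."""
    vp, vg = int(pred.sum()), int(gt.sum())
    return 1.0 - abs(vp - vg) / (vp + vg) if (vp + vg) > 0 else 1.0

# Stand-in multi-class label volumes; per-class scores as in the challenge.
rng = np.random.default_rng(0)
pred = rng.integers(0, 8, size=(64, 64, 64))
gt = rng.integers(0, 8, size=(64, 64, 64))
dice = {c: dice_score(pred == c, gt == c) for c in range(1, 8)}
vs = {c: volumetric_similarity(pred == c, gt == c) for c in range(1, 8)}
```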

{"title":"Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results.","authors":"Kelly Payette, Celine Steger, Roxane Licandro, Priscille De Dumast, Hongwei Bran Li, Matthew Barkovich, Liu Li, Maik Dannecker, Chen Chen, Cheng Ouyang, Niccolo McConnell, Alina Miron, Yongmin Li, Alena Uus, Irina Grigorescu, Paula Ramirez Gilliland, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Haoyu Wang, Ziyan Huang, Jin Ye, Mireia Alenya, Valentin Comte, Oscar Camara, Jean-Baptiste Masson, Astrid Nilsson, Charlotte Godard, Moona Mazher, Abdul Qayyum, Yibo Gao, Hangqi Zhou, Shangqi Gao, Jia Fu, Guiming Dong, Guotai Wang, ZunHyan Rieu, HyeonSik Yang, Minwoo Lee, Szymon Plotka, Michal K Grzeszczyk, Arkadiusz Sitek, Luisa Vargas Daza, Santiago Usma, Pablo Arbelaez, Wenying Lu, Wenhao Zhang, Jing Liang, Romain Valabregue, Anand A Joshi, Krishna N Nayak, Richard M Leahy, Luca Wilhelmi, Aline Dandliker, Hui Ji, Antonio G Gennari, Anton Jakovcic, Melita Klaic, Ana Adzic, Pavel Markovic, Gracia Grabaric, Gregor Kasprian, Gregor Dovjak, Milan Rados, Lana Vasung, Meritxell Bach Cuadra, Andras Jakab","doi":"10.1109/TMI.2024.3485554","DOIUrl":"https://doi.org/10.1109/TMI.2024.3485554","url":null,"abstract":"<p><p>Segmentation is a critical step in analyzing the developing human fetal brain. There have been vast improvements in automatic segmentation methods in the past several years, and the Fetal Brain Tissue Annotation (FeTA) Challenge 2021 helped to establish an excellent standard of fetal brain segmentation. However, FeTA 2021 was a single center study, limiting real-world clinical applicability and acceptance. The multi-center FeTA Challenge 2022 focused on advancing the generalizability of fetal brain segmentation algorithms for magnetic resonance imaging (MRI). In FeTA 2022, the training dataset contained images and corresponding manually annotated multi-class labels from two imaging centers, and the testing data contained images from these two centers as well as two additional unseen centers. The multi-center data included different MR scanners, imaging parameters, and fetal brain super-resolution algorithms applied. 16 teams participated and 17 algorithms were evaluated. Here, the challenge results are presented, focusing on the generalizability of the submissions. Both in- and out-of-domain, the white matter and ventricles were segmented with the highest accuracy (Top Dice scores: 0.89, 0.87 respectively), while the most challenging structure remains the grey matter (Top Dice score: 0.75) due to anatomical complexity. The top 5 average Dices scores ranged from 0.81-0.82, the top 5 average 95<sup>th</sup> percentile Hausdorff distance values ranged from 2.3-2.5mm, and the top 5 volumetric similarity scores ranged from 0.90-0.92. 
The FeTA Challenge 2022 was able to successfully evaluate and advance generalizability of multi-class fetal brain tissue segmentation algorithms for MRI and it continues to benchmark new algorithms.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142549774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CQformer: Learning Dynamics Across Slices in Medical Image Segmentation
Pub Date : 2024-10-10 DOI: 10.1109/TMI.2024.3477555
Shengjie Zhang;Xin Shen;Xiang Chen;Ziqi Yu;Bohan Ren;Haibo Yang;Xiao-Yong Zhang;Yuan Zhou
Prevalent studies on deep learning-based 3D medical image segmentation capture the continuous variation across 2D slices mainly via convolution, Transformers, inter-slice interaction, or time series models. In this work, by modeling this variation with an ordinary differential equation (ODE), we propose a cross-instance query-guided Transformer architecture (CQformer) that leverages features from preceding slices to improve the segmentation performance on subsequent slices. Its key component is a cross-attention mechanism in an ODE formulation, which bridges the features of contiguous 2D slices of the 3D volumetric data. In addition, a regression head is employed to shorten the gap between the bottleneck and the prediction layer. Extensive experiments on 7 datasets with various modalities (CT, MRI) and tasks (organ, tissue, and lesion) demonstrate that CQformer outperforms previous state-of-the-art segmentation algorithms on 6 datasets by 0.44%–2.45%, and achieves the second-highest performance of 88.30% on the BTCV dataset. The code is available at https://github.com/qbmizsj/CQformer.
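The ODE view of inter-slice variation can be made concrete with a single explicit Euler step in which the update direction comes from cross-attending the current slice's features to those of the preceding slice. The PyTorch sketch below is illustrative only; module and argument names are not taken from the CQformer code.

```python
import torch
import torch.nn as nn

class ODESliceStep(nn.Module):
    """One explicit Euler step of dh/du = CrossAttn(h_curr, h_prev):
    features of slice k are queried against slice k-1 and updated residually."""

    def __init__(self, dim, heads=4, dt=1.0):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dt = dt

    def forward(self, h_curr, h_prev):
        # h_*: (batch, tokens, dim) patch features of adjacent 2D slices.
        delta, _ = self.attn(query=h_curr, key=h_prev, value=h_prev)
        return h_curr + self.dt * delta  # h_{k+1} = h_k + dt * f(h_k, h_{k-1})

step = ODESliceStep(dim=64)
h_prev = torch.randn(2, 196, 64)   # features of slice k-1
h_curr = torch.randn(2, 196, 64)   # features of slice k
h_next = step(h_curr, h_prev)      # (2, 196, 64)
```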
{"title":"CQformer: Learning Dynamics Across Slices in Medical Image Segmentation","authors":"Shengjie Zhang;Xin Shen;Xiang Chen;Ziqi Yu;Bohan Ren;Haibo Yang;Xiao-Yong Zhang;Yuan Zhou","doi":"10.1109/TMI.2024.3477555","DOIUrl":"10.1109/TMI.2024.3477555","url":null,"abstract":"Prevalent studies on deep learning-based 3D medical image segmentation capture the continuous variation across 2D slices mainly via convolution, Transformer, inter-slice interaction, and time series models. In this work, via modeling this variation by an ordinary differential equation (ODE), we propose a cross instance query-guided Transformer architecture (CQformer) that leverages features from preceding slices to improve the segmentation performance of subsequent slices. Its key components include a cross-attention mechanism in an ODE formulation, which bridges the features of contiguous 2D slices of the 3D volumetric data. In addition, a regression head is employed to shorten the gap between the bottleneck and the prediction layer. Extensive experiments on 7 datasets with various modalities (CT, MRI) and tasks (organ, tissue, and lesion) demonstrate that CQformer outperforms previous state-of-the-art segmentation algorithms on 6 datasets by 0.44%–2.45%, and achieves the second highest performance of 88.30% on the BTCV dataset. The code is available at <uri>https://github.com/qbmizsj/CQformer</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1043-1057"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142402485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Non-Invasive Deep-Brain Imaging With 3D Integrated Photoacoustic Tomography and Ultrasound Localization Microscopy (3D-PAULM)
Pub Date : 2024-10-09 DOI: 10.1109/TMI.2024.3477317
Yuqi Tang;Nanchao Wang;Zhijie Dong;Matthew Lowerison;Angela del Aguila;Natalie Johnston;Tri Vu;Chenshuo Ma;Yirui Xu;Wei Yang;Pengfei Song;Junjie Yao
Photoacoustic computed tomography (PACT) is a proven technology for imaging hemodynamics in the deep brain of small animal models. PACT is inherently compatible with ultrasound (US) imaging, providing complementary contrast mechanisms. While PACT can quantify the brain's hemoglobin oxygen saturation (sO2), US imaging can probe blood flow based on the Doppler effect. Further, by tracking gas-filled microbubbles, ultrasound localization microscopy (ULM) can map blood flow velocity with sub-diffraction spatial resolution. In this work, we present a 3D deep-brain imaging system that seamlessly integrates PACT and ULM into a single device, 3D-PAULM. Using a low ultrasound frequency of 4 MHz, 3D-PAULM can image brain hemodynamic functions through the intact scalp and skull in a completely non-invasive manner. Using 3D-PAULM, we studied mouse brain functions under ischemic stroke. Multi-spectral PACT, US B-mode imaging, microbubble-enhanced power Doppler (PD), and ULM were performed on the same mouse brain with intrinsic image co-registration. From the multi-modality measurements, we further quantified blood perfusion, sO2, vessel density, and flow velocity of the mouse brain, showing stroke-induced ischemia, hypoxia, and reduced blood flow. We expect 3D-PAULM to find broad applications in studying deep brain functions in small animal models.
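As a small illustration of the ULM principle, the NumPy sketch below converts one tracked microbubble trajectory into a mean flow speed. Bubble localization and track pairing are assumed to have been done already, and the frame rate and pixel size are made-up values.

```python
import numpy as np

def track_speed(track_xy, frame_rate_hz, pixel_size_mm):
    """Mean flow speed (mm/s) along one microbubble track.

    track_xy: (T, 2) sub-pixel bubble positions over T consecutive frames,
              assumed to come from prior localization and pairing steps.
    """
    steps = np.diff(track_xy, axis=0)                  # per-frame motion (px)
    dist_mm = np.linalg.norm(steps, axis=1) * pixel_size_mm
    return dist_mm.mean() * frame_rate_hz

# Toy track: a bubble drifting ~0.5 px/frame, imaged at 1 kHz with 50 um pixels.
track = np.cumsum(np.full((20, 2), 0.35), axis=0)
speed = track_speed(track, frame_rate_hz=1000, pixel_size_mm=0.05)  # ~24.7 mm/s
```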
{"title":"Non-Invasive Deep-Brain Imaging With 3D Integrated Photoacoustic Tomography and Ultrasound Localization Microscopy (3D-PAULM)","authors":"Yuqi Tang;Nanchao Wang;Zhijie Dong;Matthew Lowerison;Angela del Aguila;Natalie Johnston;Tri Vu;Chenshuo Ma;Yirui Xu;Wei Yang;Pengfei Song;Junjie Yao","doi":"10.1109/TMI.2024.3477317","DOIUrl":"10.1109/TMI.2024.3477317","url":null,"abstract":"Photoacoustic computed tomography (PACT) is a proven technology for imaging hemodynamics in deep brain of small animal models. PACT is inherently compatible with ultrasound (US) imaging, providing complementary contrast mechanisms. While PACT can quantify the brain’s oxygen saturation of hemoglobin (sO<inline-formula> <tex-math>$_{{2}}text {)}$ </tex-math></inline-formula>, US imaging can probe the blood flow based on the Doppler effect. Further, by tracking gas-filled microbubbles, ultrasound localization microscopy (ULM) can map the blood flow velocity with sub-diffraction spatial resolution. In this work, we present a 3D deep-brain imaging system that seamlessly integrates PACT and ULM into a single device, 3D-PAULM. Using a low ultrasound frequency of 4 MHz, 3D-PAULM is capable of imaging the brain hemodynamic functions with intact scalp and skull in a totally non-invasive manner. Using 3D-PAULM, we studied the mouse brain functions with ischemic stroke. Multi-spectral PACT, US B-mode imaging, microbubble-enhanced power Doppler (PD), and ULM were performed on the same mouse brain with intrinsic image co-registration. From the multi-modality measurements, we further quantified blood perfusion, sO2, vessel density, and flow velocity of the mouse brain, showing stroke-induced ischemia, hypoxia, and reduced blood flow. We expect that 3D-PAULM can find broad applications in studying deep brain functions on small animal models.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"994-1004"},"PeriodicalIF":0.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142396305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GlandSAM: Injecting Morphology Knowledge Into Segment Anything Model for Label-Free Gland Segmentation
Pub Date : 2024-10-08 DOI: 10.1109/TMI.2024.3476176
Qixiang Zhang;Yi Li;Cheng Xue;Haonan Wang;Xiaomeng Li
This paper presents a label-free gland segmentation method, GlandSAM, which achieves performance comparable to supervised methods while requiring no labels during training or inference. We observe that the Segment Anything Model produces sub-optimal results on gland datasets: it either over-segments a gland into many fragments or under-segments gland regions by confusing many of them with the background, owing to the complex morphology of glands and the lack of sufficient labels. To address this challenge, GlandSAM innovatively injects two clues about gland morphology into SAM to guide the segmentation process: (1) heterogeneity within glands and (2) similarity with the background. Initially, we leverage these clues to decompose the intricate glands by selectively extracting a proposal for each gland sub-region of heterogeneous appearance. Then, we inject the morphology clues into SAM in a fine-tuning manner with a novel morphology-aware semantic grouping module that explicitly groups the high-level semantics of gland sub-regions. In this way, GlandSAM captures comprehensive knowledge about gland morphology and produces well-delineated, complete segmentation results. Extensive experiments on the GlaS and CRAG datasets reveal that GlandSAM outperforms state-of-the-art label-free methods by a significant margin. Notably, GlandSAM even surpasses several fully supervised methods that require pixel-wise labels for training, which highlights its remarkable performance and potential in the realm of gland segmentation.
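One way to picture the grouping of heterogeneous gland sub-regions is as clustering of sub-region embeddings: sub-regions that land in the same cluster can be merged into a single gland proposal. The scikit-learn sketch below is a conceptual stand-in, not the paper's morphology-aware semantic grouping module; all inputs are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_subregions(embeddings, n_groups=3):
    """Cluster sub-region embeddings into semantic groups.

    Sub-regions with heterogeneous appearances (e.g., lumen-like vs.
    epithelium-like) that fall in the same group can be merged into
    one gland proposal.
    """
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    labels = km.fit_predict(embeddings)  # (N,) group id per sub-region
    return labels, km.cluster_centers_

# 60 hypothetical 32-D sub-region embeddings from an image encoder.
embeddings = np.random.randn(60, 32)
labels, centers = group_subregions(embeddings)
```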
{"title":"GlandSAM: Injecting Morphology Knowledge Into Segment Anything Model for Label-Free Gland Segmentation","authors":"Qixiang Zhang;Yi Li;Cheng Xue;Haonan Wang;Xiaomeng Li","doi":"10.1109/TMI.2024.3476176","DOIUrl":"10.1109/TMI.2024.3476176","url":null,"abstract":"This paper presents a label-free gland segmentation, GlandSAM, which achieves comparable performance with supervised methods while no label is required during its training or inference phase. We observe that the Segment Anything model produces sub-optimal results on gland dataset: It either over-segments a gland into many fractions or under-segments the gland regions by confusing many of them with the background, due to the complex morphology of glands and lack of sufficient labels. To address this challenge, our GlandSAM innovatively injects two clues about gland morphology into SAM to guide the segmentation process: (1) Heterogeneity within glands and (2) Similarity with the background. Initially, we leverage the clues to decompose the intricate glands by selectively extracting a proposal for each gland sub-region of heterogeneous appearances. Then, we inject the morphology clues into SAM in a fine-tuning manner with a novel morphology-aware semantic grouping module that explicitly groups the high-level semantics of gland sub-regions. In this way, our GlandSAM could capture comprehensive knowledge about gland morphology, and produce well-delineated and complete segmentation results. Extensive experiments conducted on the GlaS dataset and the CRAG dataset reveal that GlandSAM outperforms state-of-the-art label-free methods by a significant margin. Notably, our GlandSAM even surpasses several fully-supervised methods that require pixel-wise labels for training, which highlights the remarkable performance and potential of GlandSAM in the realm of gland segmentation.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1070-1082"},"PeriodicalIF":0.0,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142385483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unleash the Power of State Space Model for Whole Slide Image With Local Aware Scanning and Importance Resampling
Pub Date : 2024-10-07 DOI: 10.1109/TMI.2024.3475587
Yanyan Huang;Weiqin Zhao;Yu Fu;Lingting Zhu;Lequan Yu
Whole slide image (WSI) analysis is gaining prominence within the medical imaging field. However, previous methods often fall short of efficiently processing entire WSIs due to their gigapixel size. Inspired by recent developments in state space models, this paper introduces a new Pathology Mamba (PAM) for more accurate and robust WSI analysis. PAM includes three carefully designed components that tackle the enormous image size, the utilization of local and hierarchical information, and the mismatch between the training and testing feature distributions in WSI analysis. Specifically, we design a Bi-directional Mamba Encoder to process the extensive patches present in WSIs effectively and efficiently; it can handle large-scale pathological images while achieving high performance and accuracy. To further harness the local information and inherent hierarchical structure of WSIs, we introduce a novel Local-aware Scanning module, which employs a local-aware mechanism alongside hierarchical scanning to adeptly capture both the local information and the overarching structure within WSIs. Moreover, to alleviate the patch feature distribution misalignment between training and testing, we propose a Test-time Importance Resampling module that resamples testing patches to ensure consistency of feature distribution between the training and testing phases, thus enhancing model prediction. Extensive evaluation on nine WSI datasets with cancer subtyping and survival prediction tasks demonstrates that PAM outperforms current state-of-the-art methods and has an enhanced capability for modeling discriminative areas within WSIs. The source code is available at https://github.com/HKU-MedAI/PAM.
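The test-time resampling idea can be sketched simply: score each test patch feature under a distribution fitted to the training features, then resample with those scores as probabilities. The diagonal-Gaussian model below is an assumed simplification for illustration, not the module described in the paper.

```python
import numpy as np

def importance_resample(test_feats, train_mean, train_var, n_samples=None):
    """Resample test patch features toward the training feature distribution.

    Patches that are likelier under a diagonal-Gaussian fit of the training
    features are drawn with proportionally higher probability.
    """
    if n_samples is None:
        n_samples = len(test_feats)
    z = (test_feats - train_mean) ** 2 / (train_var + 1e-6)
    loglik = -0.5 * z.sum(axis=1)         # up to an additive constant
    w = np.exp(loglik - loglik.max())     # stabilized, unnormalized weights
    w /= w.sum()
    idx = np.random.choice(len(test_feats), size=n_samples, replace=True, p=w)
    return test_feats[idx]

train = np.random.randn(5000, 128)        # training patch features
test = np.random.randn(800, 128) + 0.3    # distribution-shifted test features
resampled = importance_resample(test, train.mean(axis=0), train.var(axis=0))
```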
{"title":"Unleash the Power of State Space Model for Whole Slide Image With Local Aware Scanning and Importance Resampling","authors":"Yanyan Huang;Weiqin Zhao;Yu Fu;Lingting Zhu;Lequan Yu","doi":"10.1109/TMI.2024.3475587","DOIUrl":"10.1109/TMI.2024.3475587","url":null,"abstract":"Whole slide image (WSI) analysis is gaining prominence within the medical imaging field. However, previous methods often fall short of efficiently processing entire WSIs due to their gigapixel size. Inspired by recent developments in state space models, this paper introduces a new Pathology Mamba (PAM) for more accurate and robust WSI analysis. PAM includes three carefully designed components to tackle the challenges of enormous image size, the utilization of local and hierarchical information, and the mismatch between the feature distributions of training and testing during WSI analysis. Specifically, we design a Bi-directional Mamba Encoder to process the extensive patches present in WSIs effectively and efficiently, which can handle large-scale pathological images while achieving high performance and accuracy. To further harness the local information and inherent hierarchical structure of WSI, we introduce a novel Local-aware Scanning module, which employs a local-aware mechanism alongside hierarchical scanning to adeptly capture both the local information and the overarching structure within WSIs. Moreover, to alleviate the patch feature distribution misalignment between training and testing, we propose a Test-time Importance Resampling module to conduct testing patch resampling to ensure consistency of feature distribution between the training and testing phases, and thus enhance model prediction. Extensive evaluation on nine WSI datasets with cancer subtyping and survival prediction tasks demonstrates that PAM outperforms current state-of-the-art methods and also its enhanced capability in modeling discriminative areas within WSIs. The source code is available at <uri>https://github.com/HKU-MedAI/PAM</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1032-1042"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142384456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Morphology-Based Non-Rigid Registration of Coronary Computed Tomography and Intravascular Images Through Virtual Catheter Path Optimization
Pub Date : 2024-10-07 DOI: 10.1109/TMI.2024.3474053
Karim Kadry;Max L. Olender;Andreas Schuh;Abhishek Karmakar;Kersten Petersen;Michiel Schaap;David Marlevi;Adam UpdePac;Takuya Mizukami;Charles Taylor;Elazer R. Edelman;Farhad R. Nezami
Coronary computed tomography angiography (CCTA) provides 3D information on obstructive coronary artery disease but cannot fully visualize high-resolution features within the vessel wall. Intravascular imaging, in contrast, can spatially resolve atherosclerotic plaque in cross-sectional slices but is limited in capturing the 3D relationships between slices. Co-registering CCTA and intravascular images enables a variety of clinical research applications but is time-consuming and user-dependent, because intravascular images suffer from non-rigid distortions arising from irregularities in the imaging catheter path. To address these issues, we present a morphology-based framework for the rigid and non-rigid matching of intravascular images to CCTA images. To do this, we find the optimal virtual catheter path that samples the coronary artery in CCTA image space so as to recapitulate the coronary artery morphology observed in the intravascular image. We validate our framework on a multi-center cohort of 40 patients, using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our registration approach significantly outperforms other approaches for bifurcation alignment. By providing a differentiable framework for multi-modal vascular co-registration, our framework reduces the manual effort required to conduct large-scale multi-modal clinical studies and enables the development of machine learning-based co-registration approaches.
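A basic ingredient of any virtual catheter path is uniform arc-length sampling of a 3D curve, so that synthetic cross-sections land at the same spacing as intravascular pullback frames. The NumPy sketch below shows only that resampling step on a made-up centerline; it is not the paper's optimization framework.

```python
import numpy as np

def resample_arclength(path, spacing_mm):
    """Resample a 3D path at uniform arc-length intervals.

    path: (N, 3) ordered points along a centerline/catheter path (mm).
    Returns points spaced 'spacing_mm' apart along the curve, mimicking
    equidistant intravascular pullback frame positions.
    """
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # arc length at each vertex
    s_new = np.arange(0.0, s[-1], spacing_mm)
    return np.stack([np.interp(s_new, s, path[:, d]) for d in range(3)], axis=1)

# Hypothetical CCTA centerline, resampled at a 0.5 mm virtual pullback spacing.
centerline = np.cumsum(np.random.rand(50, 3), axis=0)
frames = resample_arclength(centerline, spacing_mm=0.5)
```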
{"title":"Morphology-Based Non-Rigid Registration of Coronary Computed Tomography and Intravascular Images Through Virtual Catheter Path Optimization","authors":"Karim Kadry;Max L. Olender;Andreas Schuh;Abhishek Karmakar;Kersten Petersen;Michiel Schaap;David Marlevi;Adam UpdePac;Takuya Mizukami;Charles Taylor;Elazer R. Edelman;Farhad R. Nezami","doi":"10.1109/TMI.2024.3474053","DOIUrl":"10.1109/TMI.2024.3474053","url":null,"abstract":"Coronary computed tomography angiography (CCTA) provides 3D information on obstructive coronary artery disease, but cannot fully visualize high-resolution features within the vessel wall. Intravascular imaging, in contrast, can spatially resolve atherosclerotic in cross sectional slices, but is limited in capturing 3D relationships between each slice. Co-registering CCTA and intravascular images enables a variety of clinical research applications but is time consuming and user-dependent. This is due to intravascular images suffering from non-rigid distortions arising from irregularities in the imaging catheter path. To address these issues, we present a morphology-based framework for the rigid and non-rigid matching of intravascular images to CCTA images. To do this, we find the optimal virtual catheter path that samples the coronary artery in CCTA image space to recapitulate the coronary artery morphology observed in the intravascular image. We validate our framework on a multi-center cohort of 40 patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our registration approach significantly outperforms other approaches for bifurcation alignment. By providing a differentiable framework for multi-modal vascular co-registration, our framework reduces the manual effort required to conduct large-scale multi-modal clinical studies and enables the development of machine learning-based co-registration approaches.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"880-890"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142384455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0