
Medical image analysis: Latest Articles

SurgLaVi: Large-Scale Hierarchical Dataset for Surgical Vision–Language Representation Learning
IF 10.9 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-11 · DOI: 10.1016/j.media.2026.103982
Alejandra Perez, Chinedu Nwoye, Ramtin Raji Kermani, Omid Mohareri, Muhammad Abdullah Jamal
{"title":"SurgLaVi: Large-Scale Hierarchical Dataset for Surgical Vision–Language Representation Learning","authors":"Alejandra Perez, Chinedu Nwoye, Ramtin Raji Kermani, Omid Mohareri, Muhammad Abdullah Jamal","doi":"10.1016/j.media.2026.103982","DOIUrl":"https://doi.org/10.1016/j.media.2026.103982","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"95 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146152677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artifact-suppressed 3D retinal microvascular segmentation via multi-scale topology regulation
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-10 · DOI: 10.1016/j.media.2026.103988
Ting Luo, Jinxian Zhang, Tao Chen, Zhouyan He, Yanda Meng, Mengting Liu, Jiong Zhang, Dan Zhang
Optical coherence tomography angiography (OCTA) enables non-invasive visualization of retinal microvasculature, and accurate 3D vessel segmentation is essential for quantifying biomarkers critical for early diagnosis and monitoring of diabetic retinopathy. However, reliable 3D OCTA segmentation is hindered by capillary invisibility, complex vascular topology, and motion artifacts, which compromise biomarker accuracy. Furthermore, the scarcity of manually annotated 3D OCTA microvascular data constrains methodological development. To address this challenge, we introduce our publicly accessible 3D microvascular dataset and propose MT-Net, a multi-view, topology-aware 3D retinal microvascular segmentation network. First, a novel dimension transformation strategy is employed to enhance topological accuracy by effectively encoding spatial dependencies across multiple planes. Second, to mitigate the impact of motion artifacts, we introduce a unidirectional Artifact Suppression Module (ASM) that selectively suppresses noise along the B-scan direction. Third, a Twin-Cross Attention Module (TCAM), guided by vessel centerlines, is designed to enhance the continuity and completeness of segmented vessels by reinforcing cross-view contextual information. Experiments on two 3D OCTA datasets show that MT-Net achieves state-of-the-art accuracy and topological consistency, with strong generalizability validated by cross-dataset analysis. We plan to release our manual annotations to facilitate future research in retinal OCTA segmentation.
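The abstract describes the Artifact Suppression Module only at a high level. As one plausible reading of directional suppression along a single volume axis (not the authors' MT-Net implementation; module name, axis choice, and kernel span below are hypothetical), a gating block whose receptive field extends only along the B-scan axis might look like this:

```python
# Illustrative sketch only: one plausible reading of directional artifact
# suppression for 3D OCTA feature volumes. Suppression evidence is gathered
# only along the B-scan axis (assumed here to be dim D) via an anisotropic
# 3D convolution; nothing here is the authors' code.
import torch
import torch.nn as nn

class DirectionalSuppressionSketch(nn.Module):
    def __init__(self, channels: int, span: int = 5):
        super().__init__()
        # Kernel extends only along dim D: context along B-scans, none across.
        self.gate = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(span, 1, 1),
                      padding=(span // 2, 0, 0)),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W). The gate in [0, 1] attenuates artifact-like
        # responses using evidence from neighbouring B-scans only.
        return x * self.gate(x)

vol = torch.randn(1, 8, 16, 32, 32)                 # toy feature volume
print(DirectionalSuppressionSketch(8)(vol).shape)   # (1, 8, 16, 32, 32)
```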
Citations: 0
Quality-label-free Fetal Brain MRI Quality Control Based on Image Orientation Recognition Uncertainty
IF 10.9 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-10 · DOI: 10.1016/j.media.2026.103994
Mingxuan Liu, Yi Liao, Haoxiang Li, Juncheng Zhu, Hongjia Yang, Yingqi Hao, Haibo Qu, Qiyuan Tian
{"title":"Quality-label-free Fetal Brain MRI Quality Control Based on Image Orientation Recognition Uncertainty","authors":"Mingxuan Liu, Yi Liao, Haoxiang Li, Juncheng Zhu, Hongjia Yang, Yingqi Hao, Haibo Qu, Qiyuan Tian","doi":"10.1016/j.media.2026.103994","DOIUrl":"https://doi.org/10.1016/j.media.2026.103994","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"92 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146152678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MICCAI STS 2024 Challenge: Semi-Supervised Instance-Level Tooth Segmentation in Panoramic X-ray and CBCT Images
IF 10.9 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-09 · DOI: 10.1016/j.media.2026.103986
Yaqi Wang, Zhi Li, Chengyu Wu, Jun Liu, Yifan Zhang, Jiaxue Ni, Qian Luo, Jialuo Chen, Hongyuan Zhang, Jin Liu, Can Han, Kaiwen Fu, Changkai Ji, Xinxu Cai, Jing Hao, Zhihao Zheng, Shi Xu, Junqiang Chen, Xiaoyang Yu, Qianni Zhang, Dahong Qian, Shuai Wang, Huiyu Zhou
Orthopantomograms (OPGs) and Cone-Beam Computed Tomography (CBCT) are vital for dentistry, but creating large datasets for automated tooth segmentation is hindered by the labor-intensive process of manual instance-level annotation. This research aimed to benchmark and advance semi-supervised learning (SSL) as a solution to this data scarcity problem. We organized the 2nd Semi-supervised Teeth Segmentation (STS 2024) Challenge at MICCAI 2024. We provided a large-scale dataset comprising over 90,000 2D images and 3D axial slices, including 2,380 OPG images and 330 CBCT scans, with detailed instance-level FDI annotations on part of the data. The challenge attracted 114 (OPG) and 106 (CBCT) registered teams. To ensure algorithmic excellence and full transparency, we rigorously evaluated the valid, open-source submissions from the top 10 (OPG) and top 5 (CBCT) teams, respectively. All successful submissions were deep learning-based SSL methods. The winning semi-supervised models demonstrated impressive performance gains over a fully-supervised nnU-Net baseline trained only on the labeled data. For the 2D OPG track, the top method improved the Instance Affinity (IA) score by over 44 percentage points. For the 3D CBCT track, the winning approach boosted the Instance Dice score by 61 percentage points. This challenge demonstrates the potential benefit of SSL for complex, instance-level medical image segmentation tasks where labeled data is scarce. The most effective approaches consistently leveraged hybrid semi-supervised frameworks that combined knowledge from foundation models like SAM with multi-stage, coarse-to-fine refinement pipelines. Both the challenge dataset and the participants' submitted code are publicly available on GitHub (https://github.com/ricoleehduu/STS-Challenge-2024), ensuring transparency and reproducibility.
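The report describes the winning pipelines only at a high level. As a generic illustration of the semi-supervised pattern the challenge benchmarks, confidence-thresholded pseudo-labeling, here is a minimal sketch (not any team's method; the class count and threshold value are arbitrary assumptions):

```python
# Generic semi-supervised segmentation step, shown for illustration only:
# a teacher labels unlabeled scans, and only confident pixels contribute
# to the student's loss. Not a reconstruction of any challenge entry.
import torch
import torch.nn.functional as F

def pseudo_label_loss(student_logits, teacher_logits, conf_thresh=0.9):
    """student_logits, teacher_logits: (B, K, H, W) for K tooth classes."""
    with torch.no_grad():
        probs = teacher_logits.softmax(dim=1)
        conf, pseudo = probs.max(dim=1)            # (B, H, W)
        mask = conf >= conf_thresh                 # keep confident pixels only
    loss = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

s = torch.randn(2, 33, 64, 64)   # e.g. 32 FDI tooth classes + background
t = torch.randn(2, 33, 64, 64)
print(pseudo_label_loss(s, t))
```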
Citations: 0
An efficient, scalable, and adaptable plug-and-play temporal attention module for motion-guided cardiac segmentation with sparse temporal labels
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-09 · DOI: 10.1016/j.media.2026.103981
Md Kamrul Hasan, Guang Yang, Choon Hwai Yap
Cardiac anatomy segmentation is essential for clinical assessment of cardiac function and disease diagnosis to inform treatment and intervention. Deep learning (DL) has improved cardiac anatomy segmentation accuracy, especially when information on cardiac motion dynamics is integrated into the networks. Several methods for incorporating motion information have been proposed; however, existing methods are not yet optimal: adding the time dimension to input data incurs high computational costs, and incorporating registration into the segmentation network remains computationally costly and can be affected by registration errors, especially with non-DL registration. While attention-based motion modeling is promising, suboptimal design constrains its capacity to learn the complex and coherent temporal interactions inherent in cardiac image sequences. Here, we propose a novel approach to incorporating motion information in DL segmentation networks: a computationally efficient yet robust Temporal Attention Module (TAM), modeled as a small, multi-headed, cross-temporal attention module, which can be inserted plug-and-play into a broad range of segmentation networks (CNN, transformer, or hybrid) without drastic architecture modification. Extensive experiments on multiple cardiac imaging datasets, including 2D echocardiography (CAMUS and EchoNet-Dynamic), 3D echocardiography (MITEA), and 3D cardiac MRI (ACDC), confirm that TAM consistently improves segmentation performance across datasets when added to a range of networks, including UNet, FCN8s, UNetR, SwinUNetR, and the recent I²UNet and DT-VNet. Integrating TAM into SAM yields a temporal SAM that reduces Hausdorff distance (HD) from 3.99 mm to 3.51 mm on the CAMUS dataset, while integrating TAM into a pre-trained MedSAM reduces HD from 3.04 to 2.06 pixels after fine-tuning on the EchoNet-Dynamic dataset. On the ACDC 3D dataset, our TAM-UNet and TAM-DT-VNet achieve substantial reductions in HD, from 7.97 mm to 4.23 mm and from 6.87 mm to 4.74 mm, respectively. Additionally, TAM's training does not require segmentation ground truths for all time frames and can be achieved with sparse temporal annotation. TAM is thus a robust, generalizable, and adaptable solution for motion-awareness enhancement that scales easily from 2D to 3D. The code is available at https://github.com/kamruleee51/TAM.
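The authors' code is at the repository above. As an independent sketch of the general mechanism the abstract names, small multi-head attention applied across the time axis of per-frame feature maps, consider the following (this is not the released TAM implementation):

```python
# Minimal sketch of "attention across time on per-frame feature maps",
# the general mechanism the abstract describes. Not the released TAM code
# (see https://github.com/kamruleee51/TAM for the authors' version).
import torch
import torch.nn as nn

class TemporalAttentionSketch(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) -> attend over T independently per pixel.
        b, t, c, h, w = x.shape
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(seq, seq, seq)          # cross-frame attention
        seq = self.norm(seq + out)                 # residual + norm
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

frames = torch.randn(2, 5, 32, 16, 16)            # 5-frame feature clip
print(TemporalAttentionSketch(32)(frames).shape)  # (2, 5, 32, 16, 16)
```

Being a per-pixel operation over a short time axis, a module like this adds little compute relative to full spatio-temporal attention, which is consistent with the plug-and-play framing in the abstract.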
Citations: 0
Physics-informed graph neural networks for flow field estimation in carotid arteries
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-07 · DOI: 10.1016/j.media.2026.103974
Julian Suk, Dieuwertje Alblas, Barbara A. Hutten, Albert Wiegman, Christoph Brune, Pim van Ooij, Jelmer M. Wolterink
Hemodynamic quantities are valuable biomedical risk factors for cardiovascular pathology such as atherosclerosis. Non-invasive, in-vivo measurement of these quantities can only be performed using a select number of modalities that are not widely available, such as 4D flow magnetic resonance imaging (MRI). In this work, we create a surrogate model for hemodynamic flow field estimation, powered by machine learning. We train graph neural networks that include priors about the underlying symmetries and physics, limiting the amount of data required for training. This allows us to train the model using moderately-sized, in-vivo 4D flow MRI datasets, instead of large in-silico datasets obtained by computational fluid dynamics (CFD), as is the current standard. We create an efficient, equivariant neural network by combining the popular PointNet++ architecture with group-steerable layers. To incorporate the physics-informed priors, we derive an efficient discretisation scheme for the involved differential operators. We perform extensive experiments in carotid arteries and show that our model can accurately estimate low-noise hemodynamic flow fields in the carotid artery. Moreover, we show how the learned relation between geometry and hemodynamic quantities transfers to 3D vascular models obtained using a different imaging modality than the training data. This shows that physics-informed graph neural networks can be trained using 4D flow MRI data to estimate blood flow in unseen carotid artery geometries.
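The paper's discretisation scheme is not reproduced in the abstract. As a crude illustration of what a physics-informed penalty can look like on point-cloud data, here is a minimal incompressibility residual estimated from neighbour differences (an assumption-laden sketch, not the authors' operator discretisation):

```python
# Illustration of the "physics-informed" ingredient in its simplest form:
# penalise deviation from incompressibility (div u = 0) using a crude
# neighbour finite-difference estimate on a point cloud. The paper derives
# a proper discretisation; this sketch is not it.
import torch

def divergence_penalty(pos, vel, edges):
    """pos, vel: (N, 3); edges: (E, 2) index pairs (i, j) of neighbours."""
    i, j = edges[:, 0], edges[:, 1]
    dx = pos[j] - pos[i]                               # (E, 3)
    dv = vel[j] - vel[i]                               # (E, 3)
    # Directional derivative of u along each edge, projected on the edge;
    # averaged over isotropic neighbours this tracks the divergence.
    ddir = (dv * dx).sum(-1) / dx.norm(dim=-1).clamp(min=1e-8).pow(2)
    div = torch.zeros(pos.shape[0]).index_add_(0, i, ddir)
    deg = torch.zeros(pos.shape[0]).index_add_(0, i, torch.ones_like(ddir))
    return (div / deg.clamp(min=1)).pow(2).mean()     # squared residual

pos = torch.rand(100, 3)                              # toy vessel points
vel = torch.randn(100, 3)                             # predicted velocities
edges = torch.randint(0, 100, (400, 2))               # toy neighbour graph
print(divergence_penalty(pos, vel, edges))
```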
Citations: 0
Diversity-driven MG-MAE: Multi-granularity representation learning for non-salient object segmentation
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-06 · DOI: 10.1016/j.media.2026.103971
Chengjin Yu, Bin Zhang, Chenchu Xu, Dongsheng Ruan, Rui Wang, Huafeng Liu, Xiaohu Li, Shuo Li
Masked Autoencoders (MAEs) have grown increasingly prominent as a powerful self-supervised learning paradigm. They can effectively leverage inherent image prior information and are gaining traction in medical image analysis. However, their application to feature representations of non-salient objects (such as microvasculature, accessory organs, and early-stage tumors) is fundamentally limited by the dimensional collapse problem, which diminishes the feature diversity critical for discriminating non-salient structures. To address this, we propose a Multi-Granularity Masked Autoencoder (MG-MAE) framework for feature diversity learning: (1) we extend the conventional MAE into a multi-granularity framework in which a global branch reconstructs global pixels and a local branch recovers Histogram of Oriented Gradients (HOG) features, enabling hierarchical representation of both coarse-grained and fine-grained patterns; (2) critically, in the local branch, a diversity-enhanced loss function incorporates a Nuclear Norm Maximization (NNM) constraint to explicitly mitigate feature-space collapse through orthogonal embedding regularization; and (3) a Dynamic Weight Adjustment (DWA) strategy dynamically prioritizes hard-to-reconstruct regions via entropy-driven gradient modulation. Comprehensive evaluations across five clinical benchmarks (CCTA139, BTCV, LiTS, ACDC, and MSD Pancreas Tumour) demonstrate that MG-MAE achieves statistically significant improvements in Dice Similarity Coefficient (DSC) scores for non-salient object segmentation, outperforming state-of-the-art methods. The code is available at https://github.com/zhangbbin/mgmae.
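The NNM constraint lends itself to a compact illustration: maximising the nuclear norm of the batch embedding matrix keeps its singular-value spectrum spread out, which resists dimensional collapse. A minimal sketch follows (the normalisation and scaling are illustrative choices, not the paper's exact loss):

```python
# Minimal version of a Nuclear Norm Maximization (NNM) diversity term:
# maximising the sum of singular values of the embedding matrix discourages
# features from collapsing into a low-dimensional subspace. Weighting and
# normalisation here are illustrative, not the paper's.
import torch

def nnm_diversity_loss(z: torch.Tensor) -> torch.Tensor:
    """z: (N, D) batch of feature embeddings. Returns a loss to *minimise*
    (negative nuclear norm, scaled by its maximum possible value)."""
    z = torch.nn.functional.normalize(z, dim=1)    # unit-norm rows
    sv = torch.linalg.svdvals(z)                   # singular values
    # For unit rows, the nuclear norm is at most sqrt(N * min(N, D)).
    bound = (z.shape[0] * min(z.shape)) ** 0.5
    return -sv.sum() / bound

z = torch.randn(64, 128, requires_grad=True)
loss = nnm_diversity_loss(z)
loss.backward()                                    # differentiable via SVD
print(loss.item())
```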
Citations: 0
SAM-Driven Cross Prompting with Adaptive Sampling Consistency for Semi-supervised Medical Image Segmentation
IF 10.9 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-04 · DOI: 10.1016/j.media.2026.103973
Juzheng Miao, Cheng Chen, Yuchen Yuan, Quanzheng Li, Pheng-Ann Heng
{"title":"SAM-Driven Cross Prompting with Adaptive Sampling Consistency for Semi-supervised Medical Image Segmentation","authors":"Juzheng Miao, Cheng Chen, Yuchen Yuan, Quanzheng Li, Pheng-Ann Heng","doi":"10.1016/j.media.2026.103973","DOIUrl":"https://doi.org/10.1016/j.media.2026.103973","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"73 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Slot-BERT: Self-supervised object discovery in surgical video
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-03 · DOI: 10.1016/j.media.2026.103972
Guiqiu Liao, Matjaž Jogan, Marcel Hussing, Kenta Nakahashi, Kazuhiro Yasufuku, Amin Madani, Eric Eaton, Daniel A. Hashimoto
Object-centric slot attention is a powerful framework for unsupervised learning of structured and explainable representations that can support reasoning about objects and actions, including in surgical video. However, current object-centric models either fail to reliably capture object dependencies in seconds-long video episodes that encompass surgical actions and tasks, or are computationally too expensive for practical implementation. We introduce Slot-BERT, a slot attention model with a temporal slot transformer module that overcomes these limitations. Our core innovations are: 1) a bidirectional transformer module that processes object-centric slot representations, enabling longer-range temporal coherence; and 2) a slot-contrastive loss that further improves the representation by enforcing slot dissimilarity. We evaluate Slot-BERT on real-world surgical video datasets from abdominal, cholecystectomy, and thoracic procedures, and on real and synthetic videos of everyday objects. Our method surpasses state-of-the-art object-centric approaches under unsupervised training, achieving superior performance across these domains. We also demonstrate efficient zero-shot domain adaptation to data from diverse surgical specialties and databases.
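The slot-contrastive idea, enforcing dissimilarity between slots, admits a simple sketch: penalise pairwise cosine similarity between different slots. The paper's exact formulation may differ (e.g. contrasting across frames or using explicit negatives); the version below is illustrative only:

```python
# Sketch of a slot-dissimilarity penalty in the spirit of the abstract's
# "slot-contrastive loss": push different slots of the same clip apart by
# penalising their pairwise cosine similarity. Not the paper's exact loss.
import torch
import torch.nn.functional as F

def slot_dissimilarity_loss(slots: torch.Tensor) -> torch.Tensor:
    """slots: (B, S, D), S object slots per video clip."""
    z = F.normalize(slots, dim=-1)
    sim = z @ z.transpose(1, 2)                   # (B, S, S) cosine matrix
    # Zero out the diagonal (a slot is always similar to itself).
    off_diag = sim - torch.diag_embed(sim.diagonal(dim1=1, dim2=2))
    s = slots.shape[1]
    return off_diag.pow(2).sum(dim=(1, 2)).div(s * (s - 1)).mean()

slots = torch.randn(4, 7, 64)                     # 7 slots, 64-dim each
print(slot_dissimilarity_loss(slots))
```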
Citations: 0
WSISum: WSI summarization via dual-level semantic reconstruction
IF 11.8 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2026-02-02 · DOI: 10.1016/j.media.2026.103970
Baizhi Wang, Kun Zhang, Yuhao Wang, Yunjie Gu, Haijing Luan, Ying Zhou, Taiyuan Hu, Rundong Wang, Zhidong Yang, Zihang Jiang, Rui Yan, S. Kevin Zhou
Each gigapixel whole slide image (WSI) contains tens of thousands of patches, many of which are redundant, leading to significant computational, storage, and transmission overhead. This motivates the need for automatic WSI summarization, which aims to extract a compact subset of patches that can effectively approximate the original WSI. In this paper, we propose WSISum, a unified framework that performs WSI Summarization through dual-level semantic reconstruction. Specifically, WSISum integrates two complementary reconstruction strategies: low-level patch semantic reconstruction via clustering-based sparse sampling; and high-level slide semantic reconstruction through knowledge distillation from multiple WSI-level foundation models. Experimental results show that WSISum achieves satisfactory performance in a variety of downstream tasks, including cancer subtyping, biomarker prediction, and metastasis subtyping, while significantly reducing computational cost. Code and models are available at https://github.com/Badgewho/WSISum.
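The low-level branch's clustering-based sparse sampling follows a familiar pattern: cluster patch embeddings and keep one representative per cluster. A minimal sketch under that assumption (the cluster count, feature source, and medoid rule below are illustrative, not the paper's settings):

```python
# Sketch of "clustering-based sparse sampling" for WSI summarisation:
# cluster patch embeddings and keep the patch nearest each centroid as the
# summary. Cluster count and distance choices are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def summarize_patches(embeddings: np.ndarray, k: int) -> np.ndarray:
    """embeddings: (N, D) patch features; returns indices of k kept patches."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    keep = []
    for c in range(k):
        members = np.flatnonzero(km.labels_ == c)
        d = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[d.argmin()])           # medoid-like representative
    return np.array(keep)

feats = np.random.randn(5000, 384).astype(np.float32)  # e.g. ViT patch features
print(summarize_patches(feats, k=64).shape)            # (64,)
```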
Citations: 0