
Medical image analysis — Latest Publications

Artifact-suppressed 3D Retinal Microvascular Segmentation via Multi-scale Topology Regulation
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-10 · DOI: 10.1016/j.media.2026.103988
Ting Luo, Jinxian Zhang, Tao Chen, Zhouyan He, Yanda Meng, Mengting Liu, Jiong Zhang, Dan Zhang
Optical coherence tomography angiography (OCTA) enables non-invasive visualization of retinal microvasculature, and accurate 3D vessel segmentation is essential for quantifying biomarkers critical for early diagnosis and monitoring of diabetic retinopathy. However, reliable 3D OCTA segmentation is hindered by capillary invisibility, complex vascular topology, and motion artifacts, which compromise biomarker accuracy. Furthermore, the scarcity of manually annotated 3D OCTA microvascular data constrains methodological development. To address this challenge, we introduce our publicly accessible 3D microvascular dataset and propose MT-Net, a multi-view, topology-aware 3D retinal microvascular segmentation network. First, a novel dimension transformation strategy is employed to enhance topological accuracy by effectively encoding spatial dependencies across multiple planes. Second, to mitigate the impact of motion artifacts, we introduce a unidirectional Artifact Suppression Module (ASM) that selectively suppresses noise along the B-scan direction. Third, a Twin-Cross Attention Module (TCAM), guided by vessel centerlines, is designed to enhance the continuity and completeness of segmented vessels by reinforcing cross-view contextual information. Experiments on two 3D OCTA datasets show that MT-Net achieves state-of-the-art accuracy and topological consistency, with strong generalizability validated by cross-dataset analysis. We plan to release our manual annotations to facilitate future research in retinal OCTA segmentation.
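The multi-plane encoding idea is easiest to picture as projecting the 3D volume into its orthogonal 2D views before segmentation. A minimal NumPy sketch of that preprocessing step (the function name and the maximum-intensity-projection choice are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def orthogonal_views(volume: np.ndarray) -> dict:
    """Collapse a 3D volume (depth, height, width) into three orthogonal
    maximum-intensity projections, one per anatomical plane. Multi-view
    networks operate on such complementary views to capture spatial
    dependencies that any single plane misses."""
    return {
        "axial":    volume.max(axis=0),  # (H, W) — en-face view
        "coronal":  volume.max(axis=1),  # (D, W)
        "sagittal": volume.max(axis=2),  # (D, H)
    }

# A single bright voxel (e.g. one vessel point) appears in all three views.
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 1.0
views = orthogonal_views(vol)
assert views["axial"][3, 1] == 1.0
assert views["coronal"][2, 1] == 1.0
assert views["sagittal"][2, 3] == 1.0
```

The actual dimension-transformation strategy in MT-Net is learned end to end; this sketch only shows the geometric intuition of reasoning over multiple planes of the same volume.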
Citations: 0
MICCAI STS 2024 Challenge: Semi-Supervised Instance-Level Tooth Segmentation in Panoramic X-ray and CBCT Images
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-09 · DOI: 10.1016/j.media.2026.103986
Yaqi Wang, Zhi Li, Chengyu Wu, Jun Liu, Yifan Zhang, Jiaxue Ni, Qian Luo, Jialuo Chen, Hongyuan Zhang, Jin Liu, Can Han, Kaiwen Fu, Changkai Ji, Xinxu Cai, Jing Hao, Zhihao Zheng, Shi Xu, Junqiang Chen, Xiaoyang Yu, Qianni Zhang, Dahong Qian, Shuai Wang, Huiyu Zhou
Orthopantomograms (OPGs) and Cone-Beam Computed Tomography (CBCT) are vital for dentistry, but creating large datasets for automated tooth segmentation is hindered by the labor-intensive process of manual instance-level annotation. This research aimed to benchmark and advance semi-supervised learning (SSL) as a solution for this data scarcity problem. We organized the 2nd Semi-supervised Teeth Segmentation (STS 2024) Challenge at MICCAI 2024. We provided a large-scale dataset comprising over 90,000 2D images and 3D axial slices, which includes 2,380 OPG images and 330 CBCT scans, with detailed instance-level FDI annotations on a subset of the data. The challenge attracted 114 (OPG) and 106 (CBCT) registered teams. To ensure algorithmic excellence and full transparency, we rigorously evaluated the valid, open-source submissions from the top 10 (OPG) and top 5 (CBCT) teams, respectively. All successful submissions were deep learning-based SSL methods. The winning semi-supervised models demonstrated impressive performance gains over a fully-supervised nnU-Net baseline trained only on the labeled data. For the 2D OPG track, the top method improved the Instance Affinity (IA) score by over 44 percentage points. For the 3D CBCT track, the winning approach boosted the Instance Dice score by 61 percentage points. This challenge demonstrates the potential benefit of SSL for complex, instance-level medical image segmentation tasks where labeled data is scarce. The most effective approaches consistently leveraged hybrid semi-supervised frameworks that combined knowledge from foundational models like SAM with multi-stage, coarse-to-fine refinement pipelines. Both the challenge dataset and the participants’ submitted code have been made publicly available on GitHub (https://github.com/ricoleehduu/STS-Challenge-2024), ensuring transparency and reproducibility.
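The winning entries combined several SSL ingredients; one classic component of such hybrid frameworks is a mean-teacher scheme, in which a teacher network tracks an exponential moving average (EMA) of the student's weights and supplies stable pseudo-labels for the unlabeled images. A minimal sketch of the EMA update (generic SSL machinery, not any specific team's pipeline):

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    """Mean-teacher update: each teacher parameter becomes
    decay * teacher + (1 - decay) * student, so the teacher drifts
    slowly toward the student and averages out its training noise."""
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [np.array([1.0, 0.0])]
student = [np.array([0.0, 1.0])]
teacher = ema_update(teacher, student, decay=0.9)
# teacher[0] is now ≈ [0.9, 0.1]
```

In a full SSL loop, the teacher's predictions on unlabeled scans are filtered (e.g. by confidence) and used as pseudo-labels in the student's loss alongside the manually annotated subset.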
Citations: 0
An Efficient, Scalable, and Adaptable Plug-and-Play Temporal Attention Module for Motion-Guided Cardiac Segmentation with Sparse Temporal Labels
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-09 · DOI: 10.1016/j.media.2026.103981
Md Kamrul Hasan, Guang Yang, Choon Hwai Yap
Cardiac anatomy segmentation is essential for clinical assessment of cardiac function and disease diagnosis to inform treatment and intervention. Deep learning (DL) has improved cardiac anatomy segmentation accuracy, especially when information on cardiac motion dynamics is integrated into the networks. Several methods for incorporating motion information have been proposed; however, existing methods are not yet optimal: adding the time dimension to input data causes high computational costs, and incorporating registration into the segmentation network remains computationally costly and can be affected by errors of registration, especially with non-DL registration. While attention-based motion modeling is promising, suboptimal design constrains its capacity to learn the complex and coherent temporal interactions inherent in cardiac image sequences. Here, we propose a novel approach to incorporating motion information in the DL segmentation networks: a computationally efficient yet robust Temporal Attention Module (TAM), modeled as a small, multi-headed, cross-temporal attention module, which can be plug-and-play inserted into a broad range of segmentation networks (CNN, transformer, or hybrid) without a drastic architecture modification. Extensive experiments on multiple cardiac imaging datasets, such as 2D echocardiography (CAMUS and EchoNet-Dynamic), 3D echocardiography (MITEA), and 3D cardiac MRI (ACDC), confirm that TAM consistently improves segmentation performance across datasets when added to a range of networks, including UNet, FCN8s, UNetR, SwinUNetR, and the recent I2UNet and DT-VNet. Integrating TAM into SAM yields a temporal SAM that reduces Hausdorff distance (HD) from 3.99 mm to 3.51 mm on the CAMUS dataset, while integrating TAM into a pre-trained MedSAM reduces HD from 3.04 to 2.06 pixels after fine-tuning on the EchoNet-Dynamic dataset. 
On the ACDC 3D dataset, our TAM-UNet and TAM-DT-VNet achieve substantial reductions in HD, from 7.97 mm to 4.23 mm and 6.87 mm to 4.74 mm, respectively. Additionally, TAM’s training does not require segmentation of ground truths from all time frames and can be achieved with sparse temporal annotation. TAM is thus a robust, generalizable, and adaptable solution for motion-awareness enhancement that is easily scaled from 2D to 3D. The code is available at https://github.com/kamruleee51/TAM.
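The core mechanism behind a cross-temporal attention plug-in can be sketched as scaled dot-product attention whose queries come from the current frame's features and whose keys/values come from neighbouring frames. A minimal NumPy sketch (single-head for brevity, whereas TAM is multi-headed; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def cross_temporal_attention(q_frame, kv_frames):
    """Attend from one frame's tokens to tokens of neighbouring frames.
    q_frame:   (N, d) feature tokens of the current frame
    kv_frames: (T, N, d) tokens of T reference frames
    Returns (N, d) motion-aware features for the current frame."""
    T, N, d = kv_frames.shape
    kv = kv_frames.reshape(T * N, d)               # pool tokens across time
    scores = q_frame @ kv.T / np.sqrt(d)           # (N, T*N) similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all tokens
    return weights @ kv                            # weighted value mix

rng = np.random.default_rng(0)
q = rng.standard_normal((3, 8))        # 3 tokens, 8-dim features
kv = rng.standard_normal((2, 3, 8))    # 2 neighbouring frames
out = cross_temporal_attention(q, kv)
assert out.shape == (3, 8)
```

Because the module only maps per-frame token features to enriched per-frame features, it can in principle be inserted between the layers of an existing 2D or 3D segmentation backbone without altering the rest of the architecture, which matches the plug-and-play usage described above.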
Citations: 0
Physics-informed graph neural networks for flow field estimation in carotid arteries
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-07 · DOI: 10.1016/j.media.2026.103974
Julian Suk, Dieuwertje Alblas, Barbara Hutten, Albert Wiegman, Christoph Brune, Pim Van Ooij, Jelmer M. Wolterink
Citations: 0
Diversity-Driven MG-MAE: Multi-Granularity Representation Learning for Non-Salient Object Segmentation
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-06 · DOI: 10.1016/j.media.2026.103971
Chengjin Yu, Bin Zhang, Chenchu Xu, Dongsheng Ruan, Rui Wang, Huafeng Liu, Xiaohu Li, Shuo Li
Citations: 0
SAM-Driven Cross Prompting with Adaptive Sampling Consistency for Semi-supervised Medical Image Segmentation
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-04 · DOI: 10.1016/j.media.2026.103973
Juzheng Miao, Cheng Chen, Yuchen Yuan, Quanzheng Li, Pheng-Ann Heng
Citations: 0
Slot-BERT: Self-Supervised Object Discovery in Surgical Video
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-03 · DOI: 10.1016/j.media.2026.103972
Guiqiu Liao, Matjaž Jogan, Marcel Hussing, Kenta Nakahashi, Kazuhiro Yasufuku, Amin Madani, Eric Eaton, Daniel A. Hashimoto
Citations: 0
WSISum: WSI summarization via dual-level semantic reconstruction
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-02-02 · DOI: 10.1016/j.media.2026.103970
Baizhi Wang, Kun Zhang, Yuhao Wang, Yunjie Gu, Haijing Luan, Ying Zhou, Taiyuan Hu, Rundong Wang, Zhidong Yang, Zihang Jiang, Rui Yan, S. Kevin Zhou
Citations: 0
ESM-AnatTractNet: advanced deep learning model of true positive eloquent white matter tractography to improve preoperative evaluation of pediatric epilepsy surgery
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-01-30 · DOI: 10.1016/j.media.2026.103969
Min-Hee Lee, Bohan Xiao, Soumyanil Banerjee, Hiroshi Uda, Yoonho Hwang, Csaba Juhász, Eishi Asano, Ming Dong, Jeong-Won Jeong
Citations: 0
DDTracking: A Diffusion Model-Based Deep Generative Framework with Local-Global Spatiotemporal Modeling for Diffusion MRI Tractography
IF 10.9 · CAS Tier 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2026-01-29 · DOI: 10.1016/j.media.2026.103967
Yijie Li, Wei Zhang, Xi Zhu, Ye Wu, Yogesh Rathi, Lauren J. O’Donnell, Fan Zhang
Citations: 0