
Latest publications in IEEE Transactions on Medical Imaging

Feature Decomposition via Shared Low-rank Matrix Recovery for CT Report Generation.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-11-03 DOI: 10.1109/tmi.2025.3628159
Yuanhe Tian, Yan Song
Generating reports for medical images is an important task in medical automation that not only provides valuable objective diagnostic evidence but also alleviates the workload of radiologists. Many existing studies focus on chest X-rays, which typically consist of one or a few images, while less attention is paid to other medical image types, such as computed tomography (CT), which contains a large number of continuous images. Many studies on CT report generation (CTRG) rely on convolutional networks or standard Transformers to model CT slice representations and combine them to obtain CT features, yet relatively little research has focused on subtle lesion features and volumetric continuity. In this paper, we propose shared low-rank matrix recovery (S-LMR) to decompose CT slices into shared anatomical patterns and lesion-focused features, together with continuous slice encoding (CSE) to explicitly model inter-slice continuity and capture progressive changes across adjacent slices; both are subsequently integrated with a large language model (LLM) for report generation. Specifically, S-LMR separates the common patterns from the sparse lesion-focused features to highlight clinically significant information. Based on the outputs of S-LMR, CSE captures inter-slice relationships within a dedicated Transformer encoder and aligns the resulting visual features with textual information, thereby instructing the LLM to produce a CT report. Experimental results on benchmark datasets for CTRG show that our approach outperforms strong baselines and existing models, demonstrating state-of-the-art performance. Analyses further confirm that S-LMR and CSE effectively capture key evidence, leading to more accurate CTRG.
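The "shared versus sparse" split that S-LMR performs is closely related to classical robust PCA, i.e., low-rank plus sparse matrix recovery. The sketch below is a generic principal-component-pursuit solver in NumPy applied to a matrix of stacked per-slice feature vectors; it only illustrates that decomposition idea, is not the authors' learned S-LMR module, and the function names, default parameters, and toy data are our own.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entry-wise shrinkage operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage operator."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def low_rank_sparse_split(M, lam=None, n_iter=500, tol=1e-7):
    """Principal component pursuit via an inexact augmented Lagrangian scheme:
    M ~= L (low-rank, shared patterns) + S (sparse, slice-specific deviations)."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = (m * n) / (4.0 * np.abs(M).sum() + 1e-12)
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    norm_M = np.linalg.norm(M) + 1e-12
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * norm_M:
            break
    return L, S

# Toy usage: rows are per-slice feature vectors stacked into one matrix.
rng = np.random.default_rng(0)
shared = rng.standard_normal((40, 1)) @ rng.standard_normal((1, 64))          # rank-1 shared pattern
spikes = (rng.random((40, 64)) < 0.02) * rng.standard_normal((40, 64)) * 5.0  # sparse deviations
L, S = low_rank_sparse_split(shared + spikes)
print(np.linalg.matrix_rank(L, tol=1e-3), float((np.abs(S) > 1e-3).mean()))
```

Under this reading, the low-rank part L plays the role of shared anatomical patterns, while the sparse residual S highlights slice-specific, lesion-like deviations.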
Citations: 0
MCS-Stain: Boosting FFPE-to-HE Virtual Staining with Multiple Cell Semantics.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-11-03 DOI: 10.1109/tmi.2025.3628174
Yihuang Hu, Zhicheng Du, Weiping Lin, Shurong Yang, Lequan Yu, Guojun Zhang, Liansheng Wang
The diagnosis of cancer primarily relies on pathological slides stained with hematoxylin and eosin (HE). These slides are typically prepared from tissue samples that have been fixed in formalin and embedded in paraffin (FFPE). However, the traditional process of staining FFPE samples with HE is time-consuming and resource-intensive. Recent advances in virtual staining technologies, driven by digital pathology and generative models, offer a promising alternative. However, the blurred structures in FFPE images pose unique challenges to achieving high-quality FFPE-to-HE virtual staining. In this context, we developed a novel Multiple Cell Semantics-guided supervised generative adversarial model, MCS-Stain. Specifically, the guidance consists of three components: (1) pretrained cell semantic guidance, which aligns the intermediate features of real and virtual images extracted by a pretrained cell segmentation model (PCSM); (2) cell mask guidance, which introduces interpretable cell information as part of the discriminator input through channel concatenation; and (3) dynamic cell semantic guidance, which aligns the dynamic intermediate features of the generator during training. Comparative results on FFPE-to-HE datasets demonstrate that MCS-Stain outperforms existing state-of-the-art (SOTA) methods with substantial qualitative and quantitative improvements. Results across various PCSMs and data sources further confirm its effectiveness and robustness. Notably, the dynamic cell semantic guidance exhibits strong potential beyond FFPE-to-HE virtual staining, as further demonstrated by virtual staining from HE images to immunohistochemical (IHC) images. In general, MCS-Stain presents a promising avenue to advance virtual staining techniques. Code is available at https://github.com/huyihuang/MCS-Stain.
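The first guidance component is essentially a perceptual-style alignment loss computed on a frozen segmentation backbone. The PyTorch sketch below illustrates that idea with a stand-in network; `ToyCellSegBackbone`, its `extract_features` hook, and the layer names are hypothetical placeholders, not the paper's PCSM or its API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCellSegBackbone(nn.Module):
    """Stand-in for a pretrained cell-segmentation model (PCSM); in practice you would
    load real pretrained weights and freeze them."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Conv2d(3, 16, 3, padding=1)
        self.enc2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)

    def extract_features(self, x):
        f1 = F.relu(self.enc1(x))
        f2 = F.relu(self.enc2(f1))
        return {"enc1": f1, "enc2": f2}

def cell_semantic_alignment_loss(frozen_model, real_he, fake_he, layers=("enc1", "enc2")):
    """Perceptual-style loss aligning intermediate features of real and generated HE images."""
    with torch.no_grad():
        real_feats = frozen_model.extract_features(real_he)   # targets, no gradient needed
    fake_feats = frozen_model.extract_features(fake_he)       # gradients flow back to the generator
    return sum(F.l1_loss(fake_feats[k], real_feats[k]) for k in layers) / len(layers)

# Toy usage with random tensors standing in for real and generated HE patches.
pcsm = ToyCellSegBackbone().eval()
for p in pcsm.parameters():
    p.requires_grad_(False)
real, fake = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64, requires_grad=True)
print(cell_semantic_alignment_loss(pcsm, real, fake).item())
```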
Citations: 0
Accelerating Volumetric Medical Image Annotation via Short-Long Memory SAM 2
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-11-03 DOI: 10.1109/tmi.2025.3627954
Yuwen Chen, Zafer Yildiz, Qihang Li, Yaqian Chen, Haoyu Dong, Hanxue Gu, Nicholas Konz, Maciej A. Mazurowski
{"title":"Accelerating Volumetric Medical Image Annotation via Short-Long Memory SAM 2","authors":"Yuwen Chen, Zafer Yildiz, Qihang Li, Yaqian Chen, Haoyu Dong, Hanxue Gu, Nicholas Konz, Maciej A. Mazurowski","doi":"10.1109/tmi.2025.3627954","DOIUrl":"https://doi.org/10.1109/tmi.2025.3627954","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"33 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145434217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sub-0.5 mm Resolution PET Scanner with 3-Layer DOI Detectors for Rodent Neuroimaging.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-31 DOI: 10.1109/tmi.2025.3627815
Han Gyu Kang, Hideaki Tashima, Hidekatsu Wakizaka, Makoto Higuchi, Taiga Yamaya
Spatial resolution is the most important parameter for preclinical positron emission tomography (PET) to visualize mouse brain function with high quantification accuracy. However, the spatial resolution of PET has been limited to over 0.5 mm, which causes a substantial partial volume effect, especially for small mouse brain structures. In this study, we present the initial results of a mouse-brain-dedicated PET scanner that can achieve sub-0.5 mm resolution. The ring diameter and axial coverage of the PET scanner are 48 mm and 23.4 mm, respectively. To encode depth-of-interaction (DOI) information, three layers of lutetium yttrium oxyorthosilicate crystals were stacked in a staggered configuration and coupled to a 5×5 array of silicon photomultipliers with a pixel pitch of 2.4 mm. The crystal pitch and total thickness are 0.8 mm and 11 mm, respectively. The PET performance was characterized according to the National Electrical Manufacturers Association NU4-2008 standard. In vivo mouse brain imaging was carried out with 18F-FITM and 18F-FDG tracers. The average radial resolution from the center to a 10 mm offset was 0.67±0.06 mm with filtered back projection. Rods of 0.45 mm diameter were identified clearly with an iterative reconstruction algorithm. To the best of our knowledge, this is the first separate identification of the hypothalamus, amygdala, and cerebellar nuclei of the mouse brain. The developed PET scanner achieved sub-0.5 mm resolution, thereby visualizing small mouse brain structures with high quantification accuracy.
Citations: 0
Deep Residual Compensation Model for Unsupervised PET Partial Volume Correction.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-31 DOI: 10.1109/tmi.2025.3627516
Jianan Cui, Jiankai Wu, Zhongxue Wu, Jianzhong He, Qingrun Zeng, Zan Chen, Yuanjing Feng
The partial volume effect (PVE) arises from the limited spatial resolution of positron emission tomography (PET) scanners, causing significant quantitative biases that hinder accurate assessment of metabolic activity. To address this problem, we propose an unsupervised deep residual compensation model (U-DRCM) for PET partial volume correction (PVC). U-DRCM first predicts an initial blur kernel for the PVE-affected PET image using a conditional blind deconvolution (CBD) module. A conditional residual compensation (CRC) module is then introduced to compensate for the error caused by inaccurate blur kernel prediction. The whole model is unsupervised, requiring only a single patient's PET image as the training label and the corresponding MR image as the network input. The performance of U-DRCM was evaluated against several established PVC approaches, including Richardson-Lucy (RL), reblurred Van-Cittert (RVC), iterative Yang (IY), neural blind deconvolution (NBD), and a deep convolutional neural network (DeepPVC), using both the simulated BrainWeb phantom and real clinical datasets. In the simulation study, U-DRCM consistently outperformed competing methods across multiple quantitative metrics, achieving a higher peak signal-to-noise ratio (PSNR), an improved structural similarity index (SSIM), and a lower root mean square error (RMSE). In the real clinical study, U-DRCM delivered substantial improvements in standardized uptake value (SUV) and standardized uptake value ratio (SUVR) across various brain volumes of interest (VOIs). Experimental results show that U-DRCM effectively mitigates the impact of PVE, resulting in high-quality PVC PET images with enhanced brain visualization.
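Richardson-Lucy (RL), listed above as a conventional baseline, is simple enough to sketch. The snippet below is a minimal NumPy/SciPy implementation that assumes an isotropic Gaussian PSF; it illustrates that baseline only, not U-DRCM itself, and the PSF model, parameter values, and toy phantom are our own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy_gaussian(blurred, psf_sigma, n_iter=30, eps=1e-8):
    """Classic Richardson-Lucy deconvolution with an isotropic Gaussian PSF."""
    blurred = np.asarray(blurred, dtype=np.float64)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, psf_sigma)
        ratio = blurred / (reblurred + eps)
        # A Gaussian PSF is symmetric, so correlating with the flipped PSF is another Gaussian blur.
        estimate *= gaussian_filter(ratio, psf_sigma)
    return estimate

# Toy usage: blur a synthetic "hot spot" volume and try to recover it.
truth = np.zeros((32, 32, 32))
truth[12:20, 12:20, 12:20] = 1.0
observed = gaussian_filter(truth, 2.0)
recovered = richardson_lucy_gaussian(observed, psf_sigma=2.0, n_iter=50)
print(float(np.abs(observed - truth).mean()), float(np.abs(recovered - truth).mean()))
```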
Citations: 0
Prompting Lipschitz-constrained network for multiple-in-one sparse-view CT reconstruction.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-30 DOI: 10.1109/tmi.2025.3627305
Baoshun Shi, Ke Jiang, Qiusheng Lian, Xinran Yu, Huazhu Fu
Despite significant advances in deep learning-based sparse-view computed tomography (SVCT) reconstruction algorithms, these methods still face two primary limitations: (i) it is challenging to explicitly prove that the prior networks of deep unfolding algorithms satisfy Lipschitz constraints, owing to their empirically designed nature; and (ii) the substantial storage cost of training a separate model for each sparse-view setting hinders practical clinical application. To address these issues, we design an explicitly provable Lipschitz-constrained network, dubbed LipNet, and integrate an explicit prompt module to provide discriminative knowledge of different sparse sampling settings, enabling multiple sparse-view configurations to be handled within a single model. Furthermore, we develop a storage-saving deep unfolding framework for multiple-in-one SVCT reconstruction, termed PromptCT, which embeds LipNet as its prior network to ensure the convergence of the corresponding iterative algorithm. In simulated- and real-data experiments, PromptCT outperforms benchmark reconstruction algorithms in multiple-in-one SVCT reconstruction, achieving higher-quality reconstructions with lower storage costs. On the theoretical side, we explicitly show that LipNet satisfies the boundary property, further prove its Lipschitz continuity, and subsequently analyze the convergence of the proposed iterative algorithms. The data and code are publicly available at https://github.com/shibaoshun/PromptCT.
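One standard way to obtain a provably Lipschitz-bounded convolutional prior is spectral normalization of every layer: a composition of 1-Lipschitz linear maps and 1-Lipschitz activations such as ReLU is itself 1-Lipschitz. The PyTorch sketch below shows that generic construction; it is not the paper's LipNet architecture, the layer sizes are arbitrary, and note that normalizing the reshaped convolution weight only approximately bounds the convolution's true operator norm.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class LipschitzCNNPrior(nn.Module):
    """Small convolutional prior whose layers are spectrally normalized.

    Each normalized convolution has (approximately) unit operator norm and ReLU is
    1-Lipschitz, so the plain sequential stack is (approximately) 1-Lipschitz overall.
    This is a generic construction, not the LipNet architecture from the paper."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(1, channels, 3, padding=1)),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Conv2d(channels, channels, 3, padding=1)),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Conv2d(channels, 1, 3, padding=1)),
        )

    def forward(self, x):
        return self.net(x)

# Example: apply the prior to one 256x256 CT iterate inside an unrolled reconstruction step.
prior = LipschitzCNNPrior()
x = torch.randn(1, 1, 256, 256)
print(prior(x).shape)  # torch.Size([1, 1, 256, 256])
```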
Citations: 0
Identification of Genetic Risk Factors Based on Disease Progression Derived From Modeling Longitudinal Phenotype Latent Pattern Representation.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-30 DOI: 10.1109/tmi.2025.3627406
Meiling Wang, Wei Shao, Daoqiang Zhang, Qingshan Liu
A defining characteristic of neurodegenerative disorders is the progressive impairment of memory and other cognitive functions. However, existing imaging genetics methods use longitudinal imaging phenotypes only in a straightforward manner, ignoring the latent pattern of the longitudinal data during disease progression. Phenotypes across multiple time-points may exhibit a latent pattern that can facilitate understanding of the progression process. Accordingly, in this paper, we explore complementary information across multiple time-points and simultaneously seek the underlying latent representation. With the complementarity of multiple time-points, the latent representation depicts the data more comprehensively than any individual time-point, thereby yielding an effective longitudinal phenotype latent pattern representation. Specifically, we first propose two latent pattern representations (LPR) for longitudinal imaging phenotypes: linear LPR (lLPR), based on linear relationships between the latent representation and each time-point, and nonlinear LPR (nonlLPR), based on neural networks to handle nonlinear relationships. Then, we compute the imaging genetic association based on the latent pattern representation. Finally, we conduct experiments on both synthetic and real longitudinal imaging genetic data. The experimental results validate that our proposed approach outperforms several competing algorithms, establishes strong associations, and discovers consistent longitudinal imaging genetic biomarkers, thereby guiding disease interpretation.
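For intuition, the linear variant can be read as a shared latent matrix Z that reconstructs each time-point's phenotype matrix through its own loading matrix, X_t ≈ Z W_t. The NumPy sketch below fits such a model with alternating least squares; it is only an illustration of a generic linear multi-time-point latent model under our own simplifying assumptions (fixed latent dimension, no regularization, no genetic data), not the paper's lLPR formulation.

```python
import numpy as np

def linear_lpr(views, k=5, n_iter=50, seed=0):
    """Fit a shared latent matrix Z (subjects x k) so that each time-point's
    phenotype matrix X_t (subjects x d_t) is approximated as Z @ W_t,
    using alternating least squares."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    Z = rng.standard_normal((n, k))
    for _ in range(n_iter):
        # With Z fixed, solve for each time-point's loading matrix W_t.
        Ws = [np.linalg.lstsq(Z, X, rcond=None)[0] for X in views]
        # With all W_t fixed, solve for the shared latent representation Z.
        A = np.hstack(Ws)             # k x sum(d_t)
        X_all = np.hstack(views)      # n x sum(d_t)
        Z = np.linalg.lstsq(A.T, X_all.T, rcond=None)[0].T
    return Z, Ws

# Toy usage: 3 time-points, 50 subjects, 20 imaging phenotypes per time-point.
rng = np.random.default_rng(1)
views = [rng.standard_normal((50, 20)) for _ in range(3)]
Z, Ws = linear_lpr(views)
print(Z.shape, Ws[0].shape)   # (50, 5) (5, 20)
```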
Citations: 0
An Alignment and Imputation Network (AINet) for Breast Cancer Diagnosis with Multimodal Multi-view Ultrasound Images.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-24 DOI: 10.1109/tmi.2025.3625254
Haoyuan Chen, Yonghao Li, Jiadong Zhang, Long Yang, Yiqun Sun, Yaling Chen, Shichong Zhou, Zhenhui Li, Xuejun Qian, Qi Xu, Dinggang Shen
Recently, numerous deep learning models have been proposed for breast cancer diagnosis using multimodal multi-view ultrasound images. However, their performance can be strongly affected when interactions between different modalities and views are overlooked. Moreover, existing methods struggle to handle cases where certain modalities or views are missing, which limits their clinical application. To address these issues, we propose a novel Alignment and Imputation Network (AINet) that integrates 1) alignment and imputation pre-training and 2) hierarchical fusion fine-tuning. Specifically, in the pre-training stage, cross-modal contrastive learning is employed to align features across different modalities, effectively capturing inter-modal interactions. To simulate missing-modality (view) scenarios, we randomly mask out features and then impute them by leveraging inter-modal and inter-view relationships. Following the clinical diagnosis procedure, the subsequent fine-tuning stage further incorporates modality-level and view-level fusion in a hierarchical manner. The proposed AINet is developed and evaluated on three datasets comprising 15,223 subjects in total. Experimental results demonstrate that AINet significantly outperforms state-of-the-art methods, particularly in handling missing modalities (views). This highlights its robustness and potential for real-world clinical applications.
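Cross-modal contrastive alignment of the kind described here is commonly implemented as a symmetric InfoNCE objective over paired features. The PyTorch sketch below shows that generic loss; the batch construction, temperature, and feature dimensions are illustrative assumptions rather than AINet's actual configuration.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(feat_a, feat_b, temperature=0.07):
    """Symmetric InfoNCE loss: features of the same subject from two ultrasound
    modalities are pulled together, features of different subjects are pushed apart."""
    a = F.normalize(feat_a, dim=1)                    # (batch, dim)
    b = F.normalize(feat_b, dim=1)                    # (batch, dim)
    logits = a @ b.t() / temperature                  # cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: a batch of 8 subjects with 128-d features from two modalities.
feat_a, feat_b = torch.randn(8, 128), torch.randn(8, 128)
print(cross_modal_contrastive_loss(feat_a, feat_b).item())
```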
Citations: 0
EviVLM: When Evidential Learning Meets Vision Language Model for Medical Image Segmentation.
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-16 DOI: 10.1109/tmi.2025.3622492
Qingtao Pan, Zhengrong Li, Guang Yang, Qing Yang, Bing Ji
The disparity between image and text representations, often referred to as the modality gap, remains a significant obstacle for Vision Language Models (VLMs) in medical image segmentation. This gap complicates multi-modal fusion, thereby restricting segmentation performance. To address this challenge, we propose the Evidence-driven Vision Language Model (EviVLM), a novel paradigm that integrates Evidential Learning (EL) into VLMs to systematically measure and mitigate the modality gap for enhanced multi-modal fusion. To drive this paradigm, an Evidence Affinity Map Generator (EAMG) is proposed to collect complementary cross-modal evidence by learning a global cross-modal affinity map, thus refining modality-specific evidence embeddings. Evidence Differential Similarity Learning (EDSL) is further proposed to collect consistent cross-modal evidence by performing a bias-variance decomposition on the differential matrix derived from the bidirectional similarity matrices between image and text evidence embeddings. Finally, subjective logic is used to map the collected evidence to opinions, and a Dempster-Shafer theory-based combination rule is introduced for opinion aggregation, thereby quantifying the modality gap and facilitating effective multi-modal integration. Experimental results on three public medical image segmentation datasets validate that the proposed EviVLM achieves state-of-the-art performance. Code is available at: https://github.com/QingtaoPan/EviVLM.
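The subjective-logic mapping and the Dempster-Shafer combination mentioned here have standard closed forms, which the NumPy sketch below reproduces for two opinions over the same K classes. It uses the reduced combination rule common in evidential deep learning and is a generic utility, not EviVLM's implementation; the example evidence values are made up.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Subjective-logic mapping from non-negative evidence e_k to an opinion:
    belief b_k = e_k / S and uncertainty u = K / S, with S = sum(e_k) + K."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K
    return evidence / S, K / S

def dempster_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two opinions over the same K classes,
    as commonly used in evidential deep learning."""
    conflict = sum(b1[i] * b2[j] for i in range(len(b1)) for j in range(len(b2)) if i != j)
    scale = 1.0 / (1.0 - conflict)
    beliefs = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    uncertainty = scale * (u1 * u2)
    return beliefs, uncertainty

# Toy usage: fuse an image-branch opinion with a text-branch opinion over 3 classes.
b_img, u_img = evidence_to_opinion([4.0, 1.0, 0.5])
b_txt, u_txt = evidence_to_opinion([3.0, 0.5, 0.5])
b_fused, u_fused = dempster_combine(b_img, u_img, b_txt, u_txt)
print(b_fused, u_fused, b_fused.sum() + u_fused)   # fused masses plus uncertainty sum to 1
```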
Citations: 0
Pelvic Fracture Reduction Planning via Joint Shape-Intensity Reference
IF 10.6 CAS Zone 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-10-15 DOI: 10.1109/tmi.2025.3621670
Xirui Zhao, Deqiang Xiao, Teng Zhang, Long Shao, Danni Ai, Jingfan Fan, Tianyu Fu, Yucong Lin, Hong Song, Junqiang Wang, Jian Yang
{"title":"Pelvic Fracture Reduction Planning via Joint Shape-Intensity Reference","authors":"Xirui Zhao, Deqiang Xiao, Teng Zhang, Long Shao, Danni Ai, Jingfan Fan, Tianyu Fu, Yucong Lin, Hong Song, Junqiang Wang, Jian Yang","doi":"10.1109/tmi.2025.3621670","DOIUrl":"https://doi.org/10.1109/tmi.2025.3621670","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"25 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145295607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0