
Latest articles in IEEE Transactions on Medical Imaging

Three-Dimensional MRI Reconstruction with 3D Gaussian Representations: Tackling the Undersampling Problem.
Pub Date : 2025-12-09 DOI: 10.1109/TMI.2025.3642134
Tengya Peng, Ruyi Zha, Zhen Li, Xiaofeng Liu, Qing Zou

Three-Dimensional Gaussian representation (3DGS) has shown substantial promise in computer vision but remains unexplored in magnetic resonance imaging (MRI). This study explores its potential for reconstructing isotropic-resolution 3D MRI from undersampled k-space data. We introduce a novel framework termed 3D Gaussian MRI (3DGSMR), which employs 3D Gaussian distributions as an explicit representation for MR volumes. Experimental evaluations indicate that the method can effectively reconstruct voxelized MR images, achieving quality on par with well-established 3D MRI reconstruction techniques in the literature. Notably, the 3DGSMR scheme operates under a self-supervised framework, obviating the need for extensive training datasets or prior model training. The approach introduces significant innovations to the domain, notably the adaptation of 3DGS to MRI reconstruction and the novel application of the existing 3DGS methodology to decompose MR signals, which are complex-valued.
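The abstract's core idea, fitting explicit Gaussian primitives to undersampled k-space with a self-supervised data-consistency objective, can be sketched in a toy 1D analogue. Everything below (the 1D simplification, function names, and the plain FFT forward model) is our illustration, not the paper's implementation:

```python
import numpy as np

def render_volume(amps, centers, widths, grid):
    """Sum of 1D Gaussians with complex amplitudes -- a toy analogue of
    representing a complex-valued MR volume with 3D Gaussians."""
    basis = np.exp(-0.5 * ((grid[None, :] - centers[:, None]) / widths[:, None]) ** 2)
    return (amps[:, None] * basis).sum(axis=0)

def data_consistency_loss(params, kspace_meas, mask, grid):
    """Self-supervised objective: FFT of the rendered volume vs. the acquired
    (undersampled) k-space samples -- no training data or pretrained model."""
    vol = render_volume(*params, grid)
    return float(np.sum(np.abs((np.fft.fft(vol) - kspace_meas) * mask) ** 2))

# Toy ground truth: two complex-amplitude Gaussians, 2x-undersampled k-space.
grid = np.linspace(-1.0, 1.0, 64)
true = (np.array([1.0 + 0.5j, 0.8 - 0.2j]),   # amplitudes (complex)
        np.array([-0.3, 0.4]),                # centers
        np.array([0.10, 0.15]))               # widths
kspace = np.fft.fft(render_volume(*true, grid))
mask = (np.arange(64) % 2 == 0).astype(float)  # keep every other k-space sample
```

In the paper's setting the Gaussian parameters would be optimized by gradient descent on this kind of loss; here the loss merely vanishes at the true parameters and grows when they are perturbed.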

OPTIKS: Optimized Gradient Properties Through Timing in K-Space.
Pub Date : 2025-12-02 DOI: 10.1109/TMI.2025.3639398
Matthew A McCready, Xiaozhi Cao, Kawin Setsompop, John M Pauly, Adam B Kerr

OPTIKS is a customizable method for designing fast trajectory-constrained gradient waveforms with optimized time-domain properties. Given a specified multidimensional k-space trajectory, the method optimizes traversal speed (and therefore timing) as a function of position along the trajectory. OPTIKS facilitates optimization of objectives that depend on the time-domain gradient waveform and the arc-length-domain k-space speed. It is applied to design waveforms that limit peripheral nerve stimulation (PNS), minimize mechanical resonance excitation, and reduce acoustic noise. A variety of trajectory examples are presented, including spirals, circular echo-planar imaging, and rosettes. Design performance is evaluated based on duration, standardized PNS models, field measurements, gradient-coil back-EMF measurements, and calibrated acoustic measurements. We show reductions in back-EMF of up to 94% and in field oscillations of up to 91.1%, acoustic noise decreases of up to 9.22 dB, and, with efficient use of PNS models, speed increases of up to 11.4%. The implementation is available as an open-source Python package on GitHub (https://github.com/mamccready/optiks).
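The "optimize traversal speed with position along the trajectory" idea has a classical skeleton: for an arc-length-parameterized trajectory, gradient amplitude caps the k-space speed and slew rate caps its rate of change and the centripetal term. The sketch below is that generic time-optimal-traversal scheme, not the OPTIKS package's API; constants, units, and the forward/backward-pass formulation are our assumptions:

```python
import numpy as np

GAMMA = 42.58e6  # 1H gyromagnetic ratio in Hz/T (our assumption, not from the abstract)

def max_speed_profile(curvature, g_max, s_max, ds):
    """Speed limit v(s) = ds/dt along an arc-length-parameterized k-space
    trajectory (|k'(s)| = 1, k in 1/m):
      amplitude limit:        v <= gamma * Gmax
      centripetal slew limit: kappa * v**2 / gamma <= Smax
    plus a forward/backward pass enforcing the tangential slew limit."""
    v = np.minimum(GAMMA * g_max,
                   np.sqrt(GAMMA * s_max / np.maximum(curvature, 1e-12)))
    a = GAMMA * s_max                    # max tangential k-space acceleration
    v[0] = 0.0                           # start from rest (end speed left free)
    for i in range(1, len(v)):           # forward: reachable while accelerating
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2.0 * a * ds))
    for i in range(len(v) - 2, -1, -1):  # backward: reachable while decelerating
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2.0 * a * ds))
    return v

# Constant-curvature (circular) segment, 200 samples spaced ds = 10 (1/m) apart.
curv = np.full(200, 0.01)  # curvature = 1 / k-space radius, so units of metres
v = max_speed_profile(curv, g_max=40e-3, s_max=150.0, ds=10.0)
```

OPTIKS generalizes this by letting the designer swap in other arc-length-domain objectives (PNS models, mechanical resonance bands, acoustics) in place of the bare amplitude/slew limits.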

A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation.
Pub Date : 2025-12-01 DOI: 10.1109/TMI.2025.3585765
Haibo Jin, Haoxuan Che, Sunan He, Hao Chen

Despite the progress of radiology report generation (RRG), existing works face two challenges: 1) performance in clinical efficacy is unsatisfactory, especially for descriptions of lesion attributes; 2) the generated text lacks explainability, making it difficult for radiologists to trust the results. To address these challenges, we focus on a trustworthy RRG model, which not only generates accurate descriptions of abnormalities but also provides the basis for its predictions. To this end, we propose a framework named chain of diagnosis (CoD), which maintains a chain of diagnostic process for clinically accurate and explainable RRG. It first generates question-answer (QA) pairs via diagnostic conversation to extract key findings, then prompts a large language model with the QA diagnoses for accurate generation. To enhance explainability, a diagnosis grounding module is designed to match QA diagnoses and generated sentences, where the diagnoses act as a reference. Moreover, a lesion grounding module is designed to locate abnormalities in the image, further improving the working efficiency of radiologists. To facilitate label-efficient training, we propose an omni-supervised learning strategy with clinical consistency to leverage various types of annotations from different datasets. Our efforts lead to 1) an omni-labeled RRG dataset with QA pairs and lesion boxes; 2) an evaluation tool for assessing the accuracy of reports in describing lesion location and severity; and 3) extensive experiments demonstrating the effectiveness of CoD: it consistently outperforms both specialist and generalist models on two RRG benchmarks and shows promising explainability by accurately grounding generated sentences to QA diagnoses and images.
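The two pipeline stages the abstract names, prompting an LLM with QA diagnoses and grounding generated sentences back to those diagnoses, can be sketched as plain functions. The prompt wording is ours, and the token-overlap matcher is only a crude stand-in for the paper's learned grounding module:

```python
def build_report_prompt(qa_pairs):
    """Assemble an LLM prompt from diagnostic QA findings (wording illustrative)."""
    lines = ["Key findings from the diagnostic conversation:"]
    lines += [f"- Q: {q}  A: {a}" for q, a in qa_pairs]
    lines.append("Write a radiology report consistent with these findings.")
    return "\n".join(lines)

def ground_sentences(sentences, diagnoses):
    """Match each generated sentence to its closest QA diagnosis by Jaccard
    token overlap -- a toy substitute for the learned diagnosis grounding."""
    def jaccard(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)
    return [max(diagnoses, key=lambda d: jaccard(s, d)) for s in sentences]
```

In the paper the grounding output serves as the evidence a radiologist can inspect; here it simply pairs each report sentence with the diagnosis it most resembles.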

Mutualistic Multi-Network Noisy Label Learning (MMNNLL) Method and Its Application to Transdiagnostic Classification of Bipolar Disorder and Schizophrenia.
Pub Date : 2025-12-01 DOI: 10.1109/TMI.2025.3585880
Yuhui Du, Zheng Wang, Ju Niu, Yulong Wang, Godfrey D Pearlson, Vince D Calhoun

The subjective nature of diagnosing mental disorders complicates achieving accurate diagnoses. The complex relationship among disorders further exacerbates this issue, particularly in clinical practice where conditions like bipolar disorder (BP) and schizophrenia (SZ) can present similar clinical symptoms and cognitive impairments. To address these challenges, this paper proposes a mutualistic multi-network noisy label learning (MMNNLL) method, which aims to enhance diagnostic accuracy by leveraging neuroimaging data in the presence of potential clinical diagnosis bias or errors. MMNNLL effectively utilizes multiple deep neural networks (DNNs) to learn from data with noisy labels by maximizing the consistency among DNNs in identifying and utilizing samples with clean and noisy labels. Experimental results on the public CIFAR-10 and PathMNIST datasets demonstrate the effectiveness of our method in classifying independent test data across various types and levels of label noise. Additionally, our MMNNLL method significantly outperforms state-of-the-art noisy label learning methods. When applied to brain functional connectivity data from BP and SZ patients, our method identifies two biotypes that show more pronounced group differences and yield improved classification accuracy compared to the original clinical categories, using both traditional machine learning and advanced deep learning techniques. In summary, our method effectively addresses the possible inaccuracy in the nosology of mental disorders and achieves transdiagnostic classification through robust noisy label learning via multi-network collaboration and competition.
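The abstract's key mechanism, multiple networks agreeing on which samples have clean labels, is in the family of small-loss/co-teaching selection rules. A minimal two-network version of such a rule (the exact criterion MMNNLL uses is not given in the abstract, so the details below are assumptions):

```python
import numpy as np

def select_clean(losses_a, losses_b, preds_a, preds_b, keep_ratio=0.5):
    """Flag a sample as clean-labelled only when BOTH networks rank it among
    their smallest losses AND agree on the predicted label -- a simplified
    stand-in for MMNNLL's cross-network consistency criterion."""
    n_keep = int(len(losses_a) * keep_ratio)
    small_a = set(np.argsort(losses_a)[:n_keep])   # low-loss set, network A
    small_b = set(np.argsort(losses_b)[:n_keep])   # low-loss set, network B
    agree = {i for i, (pa, pb) in enumerate(zip(preds_a, preds_b)) if pa == pb}
    return sorted(small_a & small_b & agree)       # indices treated as clean
```

Samples outside this intersection would be treated as noisily labelled (e.g. down-weighted or relabelled); iterating this selection during training is what lets the networks mutually reinforce each other.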

Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.
Pub Date : 2025-12-01 DOI: 10.1109/TMI.2025.3585560
Hengxiang Shi, Ping Wang, Shouhui Zhang, Xiuyang Zhao, Bo Yang, Caiming Zhang

Deep implicit functions (DIFs) effectively represent shapes by using a neural network to map 3D spatial coordinates to scalar values that encode the shape's geometry, but it is difficult to establish correspondences between shapes directly, limiting their use in medical image registration. Recently presented deformation-field-based methods achieve implicit template learning via template field learning with DIFs and deformation field learning, establishing shape correspondence through deformation fields. Although these approaches enable joint learning of shape representation and shape correspondence, the decoupled optimization of the template field and the deformation field, caused by the absence of deformation annotations, leads to a relatively accurate template field but an under-optimized deformation field. In this paper, we propose a novel implicit template learning framework via a shared hybrid diffeomorphic flow (SHDF), which enables shared optimization for deformation and template, contributing to better deformations and shape representation. Specifically, we formulate the signed distance function (SDF, a type of DIF) as a one-dimensional (1D) integral, unifying dimensions to match the form used in solving the ordinary differential equation (ODE) for deformation field learning. Then, the SDF in 1D integral form is integrated seamlessly into deformation field learning. Using a recurrent learning strategy, we frame shape representations and deformations as solutions of different initial value problems of the same ODE. We also introduce a global smoothness regularization to handle local optima due to limited outside-of-shape data. Experiments on medical datasets show that SHDF outperforms state-of-the-art methods in shape representation and registration.
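The "initial value problems of the same ODE" framing reduces, at its simplest, to integrating points through a velocity field. A toy forward-Euler integrator (the paper's velocity field is learned; the rotation field and step count here are our illustration) shows the approximate invertibility that makes such flows attractive for registration:

```python
import numpy as np

def flow_points(points, velocity, n_steps=256):
    """Forward-Euler integration of dx/dt = v(x, t) over t in [0, 1]; the paper
    frames both the deformation and the SDF as initial-value problems of an ODE."""
    dt = 1.0 / n_steps
    x = np.asarray(points, dtype=float).copy()
    for step in range(n_steps):
        x = x + dt * velocity(x, step * dt)
    return x

# Illustrative stationary velocity field: rotation about the origin.
rot = lambda x, t: 0.5 * np.stack([-x[:, 1], x[:, 0]], axis=1)
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
warped = flow_points(pts, rot)                            # deform
restored = flow_points(warped, lambda x, t: -rot(x, t))   # approximate inverse
```

Negating the velocity field approximately inverts the deformation (exactly so in the continuous limit), which is the diffeomorphic property registration methods rely on.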

Guest Editorial Special Issue on Advancements in Foundation Models for Medical Imaging
Pub Date : 2025-10-27 DOI: 10.1109/TMI.2025.3613074
Tianming Liu;Dinggang Shen;Jong Chul Ye;Marleen de Bruijne;Wei Liu
Pretrained on massive datasets, Foundation Models (FMs) are revolutionizing medical imaging by offering scalable and generalizable solutions to longstanding challenges. This Special Issue on Advancements in Foundation Models for Medical Imaging presents FM-related works that explore the potential of FMs to address data scarcity, domain shifts, and multimodal integration across a wide range of medical imaging tasks, including segmentation, diagnosis, reconstruction, and prognosis. The included papers also examine critical concerns such as interpretability, efficiency, benchmarking, and ethics in the adoption of FMs for medical imaging. Collectively, the articles in this Special Issue mark a significant step toward establishing FMs as a cornerstone of next-generation medical imaging AI.
Leveraging Diffusion Model and Image Foundation Model for Improved Correspondence Matching in Coronary Angiography.
Pub Date : 2025-10-20 DOI: 10.1109/TMI.2025.3623507
Lin Zhao, Xin Yu, Yikang Liu, Xiao Chen, Eric Z Chen, Terrence Chen, Shanhui Sun

Accurate correspondence matching in coronary angiography images is crucial for reconstructing 3D coronary artery structures, which is essential for precise diagnosis and treatment planning of coronary artery disease (CAD). Traditional matching methods for natural images often fail to generalize to X-ray images due to inherent differences such as lack of texture, lower contrast, and overlapping structures, compounded by insufficient training data. To address these challenges, we propose a novel pipeline that generates realistic paired coronary angiography images using a diffusion model conditioned on 2D projections of 3D reconstructed meshes from Coronary Computed Tomography Angiography (CCTA), providing high-quality synthetic data for training. Additionally, we employ large-scale image foundation models to guide feature aggregation, enhancing correspondence matching accuracy by focusing on semantically relevant regions and keypoints. Our approach demonstrates superior matching performance on synthetic datasets and effectively generalizes to real-world datasets, offering a practical solution for this task. Furthermore, our work investigates the efficacy of different foundation models in correspondence matching, providing novel insights into leveraging advanced image foundation models for medical imaging applications.
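The correspondence step, once foundation-model features have been aggregated per keypoint, typically reduces to matching feature vectors across the two angiographic views. A generic mutual-nearest-neighbour baseline for that step (the paper's actual matcher and feature extractor are more elaborate; this is only the standard skeleton):

```python
import numpy as np

def mutual_nearest_matches(feat_a, feat_b):
    """Mutual-nearest-neighbour matching on L2-normalized feature vectors --
    a generic baseline for the foundation-model-feature correspondence step."""
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T                                   # cosine similarity (Na, Nb)
    ab, ba = sim.argmax(axis=1), sim.argmax(axis=0)
    # keep (i, j) only if i's best match is j AND j's best match is i
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]

# Keypoint features of "view B" are a permutation of those of "view A".
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((5, 16))
perm = np.array([3, 0, 4, 1, 2])
feat_b = feat_a[perm]
matches = mutual_nearest_matches(feat_a, feat_b)
```

The mutual-consistency check is what suppresses spurious one-way matches, which is especially important in low-texture X-ray images.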

FairFedMed: Benchmarking Group Fairness in Federated Medical Imaging with FairLoRA.
Pub Date : 2025-10-16 DOI: 10.1109/TMI.2025.3622522
Minghan Li, Congcong Wen, Yu Tian, Min Shi, Yan Luo, Hao Huang, Yi Fang, Mengyu Wang

Fairness remains a critical concern in healthcare, where unequal access to services and treatment outcomes can adversely affect patient health. While Federated Learning (FL) presents a collaborative and privacy-preserving approach to model training, ensuring fairness is challenging due to heterogeneous data across institutions, and current research primarily addresses non-medical applications. To fill this gap, we establish the first experimental benchmark for fairness in medical FL, evaluating six representative FL methods across diverse demographic attributes and imaging modalities. We introduce FairFedMed, the first medical FL dataset specifically designed to study group fairness (i.e., consistent performance across demographic groups). It comprises two parts: FairFedMed-Oph, featuring 2D fundus and 3D OCT ophthalmology samples with six demographic attributes; and FairFedMed-Chest, which simulates real cross-institutional FL using subsets of CheXpert and MIMIC-CXR. Together, they support both simulated and real-world FL across diverse medical modalities and demographic groups. Existing FL models often underperform on medical images and overlook fairness across demographic groups. To address this, we propose FairLoRA, a fairness-aware FL framework based on SVD-based low-rank approximation. It customizes singular value matrices per demographic group while sharing singular vectors, ensuring both fairness and efficiency. Experimental results on the FairFedMed dataset demonstrate that FairLoRA not only achieves state-of-the-art performance in medical image classification but also significantly improves fairness across diverse populations. Our code and dataset are available on GitHub: https://github.com/Harvard-AI-and-Robotics-Lab/FairFedMed.
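The "shared singular vectors, per-group singular values" design can be written in one line of linear algebra. The sketch below shows the decomposition only; the per-group scale vectors would be learned in the actual framework and are hypothetical here:

```python
import numpy as np

def per_group_weights(W, group_scales):
    """FairLoRA-style factorization sketch: all groups share the singular
    vectors U, V of a weight matrix W, while each group g re-weights the
    singular values by a (hypothetical) scale vector c_g:
        W_g = U @ diag(s * c_g) @ V^T."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return {g: U @ np.diag(s * c) @ Vt for g, c in group_scales.items()}

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 4))
groups = {"group_a": np.ones(4),                       # identity scales: W_a == W
          "group_b": np.array([1.0, 1.0, 0.5, 0.0])}   # low-rank variant for group b
W_g = per_group_weights(W, groups)
```

Because only the small scale vectors differ per group, the scheme adds fairness capacity at negligible parameter cost, which is the efficiency argument the abstract makes.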

Ultrasound Autofocusing: Common Midpoint Phase Error Optimization via Differentiable Beamforming.
Pub Date : 2025-09-09 DOI: 10.1109/TMI.2025.3607875
Walter Simson, Louise Zhuang, Benjamin N Frey, Sergio J Sanabria, Jeremy J Dahl, Dongwoon Hyun

In ultrasound imaging, propagation of an acoustic wavefront through heterogeneous media causes phase aberrations that degrade the coherence of the reflected wavefront, leading to reduced image resolution and contrast. Adaptive imaging techniques attempt to correct this phase aberration and restore coherence, leading to improved focusing of the image. We propose an autofocusing paradigm for aberration correction in ultrasound imaging by fitting an acoustic velocity field to pressure measurements, via optimization of the common midpoint phase error (CMPE), using a straight-ray wave propagation model for beamforming in diffusely scattering media. We show that CMPE induced by heterogeneous acoustic velocity is a robust measure of phase aberration that can be used for acoustic autofocusing. CMPE is optimized iteratively using a differentiable beamforming approach to simultaneously improve the image focus while estimating the acoustic velocity field of the interrogated medium. The approach relies solely on wavefield measurements using a straight-ray integral solution of the two-way time-of-flight without explicit numerical time-stepping models of wave propagation. We demonstrate method performance through in silico simulations, in vitro phantom measurements, and in vivo mammalian models, showing practical applications in distributed aberration quantification, correction, and velocity estimation for medical ultrasound autofocusing.
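The autofocusing objective can be illustrated with a toy phase-error fit: synthesize echo phases under a straight-ray two-way time-of-flight model at a known sound speed, then recover that speed by minimizing the phase residual. The paper optimizes a full velocity field through a differentiable beamformer over common-midpoint channel pairs; this sketch collapses that to one scalar slowness and one point scatterer, and ignores phase wrapping. All names and numerical values are illustrative assumptions.

```python
import numpy as np

F0 = 5e6                                   # center frequency [Hz]
ELEMS = np.linspace(-5e-3, 5e-3, 8)        # element x-positions [m]
TARGET = np.array([1e-3, 20e-3])           # scatterer (x, z) [m]
TRUE_SLOWNESS = 1.0 / 1540.0               # s/m (soft tissue)

def two_way_tof(x_tx, x_rx, slowness):
    """Straight-ray two-way time of flight: tx element -> target -> rx.
    A heterogeneous medium would replace this with a line integral
    of a slowness map along each ray."""
    p_tx = np.array([x_tx, 0.0])
    p_rx = np.array([x_rx, 0.0])
    path = np.linalg.norm(TARGET - p_tx) + np.linalg.norm(TARGET - p_rx)
    return path * slowness

pairs = [(i, j) for i in range(len(ELEMS)) for j in range(len(ELEMS))]

def phases(slowness):
    # Unwrapped phase of each tx/rx channel pair at the model's slowness.
    return np.array([2 * np.pi * F0 * two_way_tof(ELEMS[i], ELEMS[j], slowness)
                     for i, j in pairs])

measured = phases(TRUE_SLOWNESS)           # "measurements" at the true speed

def phase_error(slowness):
    """Mean squared phase residual; zero when the model matches the medium."""
    return np.mean((measured - phases(slowness)) ** 2)

# Coarse grid search stands in for gradient descent through the beamformer.
grid = 1.0 / np.linspace(1400.0, 1700.0, 301)
best = grid[np.argmin([phase_error(s) for s in grid])]
print(round(1.0 / best, 1))                # -> 1540.0 (m/s recovered)
```

The grid search here is only a stand-in: the differentiable-beamforming formulation in the paper exists precisely so that the velocity field can be updated by gradient descent instead.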

LGFFM: A Localized and Globalized Frequency Fusion Model for Ultrasound Image Segmentation.
Pub Date : 2025-08-19 DOI: 10.1109/TMI.2025.3600327
Xiling Luo, Yi Wang, Le Ou-Yang

Accurate segmentation of ultrasound images plays a critical role in disease screening and diagnosis. Recently, neural network-based methods have garnered significant attention for their potential in improving ultrasound image segmentation. However, these methods still face significant challenges, primarily due to inherent issues in ultrasound images, such as low resolution, speckle noise, and artifacts. Additionally, ultrasound image segmentation encompasses a wide range of scenarios, including organ segmentation (e.g., cardiac and fetal head) and lesion segmentation (e.g., breast cancer and thyroid nodules), making the task highly diverse and complex. Existing methods are often designed for specific segmentation scenarios, which limits their flexibility and ability to meet the diverse needs across various scenarios. To address these challenges, we propose a novel Localized and Globalized Frequency Fusion Model (LGFFM) for ultrasound image segmentation. Specifically, we first design a Parallel Bi-Encoder (PBE) architecture that integrates Local Feature Blocks (LFB) and Global Feature Blocks (GLB) to enhance feature extraction. Additionally, we introduce a Frequency Domain Mapping Module (FDMM) to capture texture information, particularly high-frequency details such as edges. Finally, a Multi-Domain Fusion (MDF) method is developed to effectively integrate features across different domains. We conduct extensive experiments on eight representative public ultrasound datasets across four different types. The results demonstrate that LGFFM outperforms current state-of-the-art methods in both segmentation accuracy and generalization performance.
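The high-frequency emphasis attributed to the Frequency Domain Mapping Module can be illustrated with a plain FFT high-pass: transform the image, zero out a low-frequency disk, and transform back, leaving an edge-like map. The learned module in the paper is more elaborate; the radial mask and cutoff here are illustrative assumptions.

```python
import numpy as np

def high_freq_map(img, cutoff=0.1):
    """Return the high-frequency component of a 2-D image.

    cutoff: normalized radius below which frequencies are zeroed.
    A crude stand-in for a learned frequency-domain module.
    """
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))          # DC moved to center
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized radius
    spec[r < cutoff] = 0.0                            # drop low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

# A flat image has no high-frequency content; a sharp step (an edge) does.
flat = np.ones((32, 32))
step = np.zeros((32, 32))
step[:, 16:] = 1.0
print(np.abs(high_freq_map(flat)).max() < 1e-9,
      np.abs(high_freq_map(step)).max() > 0.1)        # -> True True
```

In a segmentation network this kind of map would be computed on feature tensors and merged with the spatial branches, which is the role the abstract assigns to the Multi-Domain Fusion step.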
