
Latest publications in Computerized Medical Imaging and Graphics

Attention-aware network with lightness embedding and Hybrid Guided Embedding for laparoscopic image desmoking
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | DOI: 10.1016/j.compmedimag.2025.102691
Ziteng Liu , Chenghong Zhang , Dongdong He , Chenyang Yang , Hao Liu , Wenpeng Gao , Yili Fu
Surgical smoke removal is crucial for enhancing laparoscopic image quality in computer-assisted surgery. While existing methods utilize estimated smoke distribution to address non-homogeneous characteristics, most treat this information merely as a prior input and often suffer from over-desmoking artifacts. To address these limitations, this study introduces a desmoking network that reconstructs smoke-free images by explicitly utilizing smoke distribution information. The network comprises two key modules: the Smoke Attention Estimator (SAE) and the Hybrid Guided Embedding (HGE). The SAE generates a smoke attention map via a channel-aware position embedding with a lightness prior to improve accuracy. The HGE takes the predicted smoke attention map from the SAE as input and employs convolutional layers along with a novel field transformation method to generate residual terms. By combining these residual terms with the original image, the HGE preserves fine details in smoke-free regions, thereby preventing over-desmoking. Experimental results reveal that the proposed method achieves improvements of at least 3.71% in Peak Signal-to-Noise Ratio (PSNR) and 18.75% in Learned Perceptual Image Patch Similarity (LPIPS) compared to state-of-the-art methods on the synthetic dataset, while attaining the lowest Perception-based Image Quality Evaluator score (24.55) on the Cholec80 dataset. It operates at around 174 frames per second, indicating strong real-time processing capability. The network achieves over 40 dB in PSNR for smoke-free regions, excelling in both color restoration and detail preservation. This work is available at https://homepage.hit.edu.cn/wpgao?lang=en.
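The abstract's key idea for avoiding over-desmoking is to add attention-weighted residual terms back onto the original image, so smoke-free pixels pass through nearly unchanged. A minimal NumPy sketch of that combination step, with all names and the linear blend being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def desmoke(image, smoke_attention, residual):
    """Blend a predicted residual into the image, weighted by smoke attention.

    Where attention is near 0 (smoke-free regions), the output stays close to
    the input, which is the behaviour the HGE module aims for. All names here
    are illustrative, not the paper's code.
    """
    smoke_attention = np.clip(smoke_attention, 0.0, 1.0)
    return image + smoke_attention * residual

# A smoke-free pixel (attention 0) is left untouched; a smoky one is corrected.
img = np.array([0.2, 0.8])
att = np.array([0.0, 1.0])
res = np.array([0.5, -0.3])
out = desmoke(img, att, res)
```

The design choice this illustrates: because the correction is gated by the attention map rather than applied uniformly, detail in clean regions is preserved by construction.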
Citations: 0
Defect-adaptive landmark detection in pelvis CT images via personalized structure-aware learning
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | DOI: 10.1016/j.compmedimag.2025.102693
Xirui Zhao , Deqiang Xiao , Teng Zhang , Jingfan Fan , Danni Ai , Tianyu Fu , Yucong Lin , Long Shao , Hong Song , Junqiang Wang , Jian Yang
Accurate localization of anatomical landmarks from pelvic CT images is crucial for preoperative planning in orthopedic procedures. However, existing automatic methods often underperform when facing defective bone structures, which are common in clinical scenarios involving trauma, resection, or severe degeneration. To address this challenge, we propose DADNet, a defect-adaptive detection network that incorporates personalized structural priors to achieve accurate and robust landmark detection in defective pelvis CT images. DADNet first constructs a structure-aware soft prior map that encodes the spatial distribution of landmarks based on the individual bone anatomy. This prior map, which highlights landmark-related regions, is generated via a dedicated convolutional module followed by logarithmic transformation. Guided by this soft prior, we extract local patches around the candidate regions and perform landmark regression using a patch-based context-aware detection network. To further enhance detection robustness in defective regions, we introduce a bone-aware detection loss that modulates the prediction confidence based on bone structures. The modulation weight is dynamically adjusted during training via a sigmoid scheduler, enabling progressive adaptation from coarse to fine structural constraints. We evaluate DADNet on both public and private datasets featuring varying degrees of pelvic defects. Our approach achieves an average detection error of 1.252 ± 0.075 mm on severely defective cases, significantly outperforming existing methods. The proposed framework demonstrates strong adaptability to anatomical variability and structural incompleteness, offering a promising tool for accurate and robust landmark detection in challenging clinical cases.
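The abstract mentions a sigmoid scheduler that ramps the bone-aware loss weight from coarse to fine constraints during training. A minimal sketch of such a scheduler, where the gain `k`, the midpoint, and the normalisation are assumptions, not the paper's values:

```python
import math

def modulation_weight(epoch, total_epochs, k=10.0):
    """Sigmoid scheduler: the weight rises smoothly from near 0 early in
    training to near 1 at the end, gradually strengthening the bone-aware
    loss term. k and the 0.5 midpoint are illustrative assumptions."""
    t = epoch / total_epochs  # training progress in [0, 1]
    return 1.0 / (1.0 + math.exp(-k * (t - 0.5)))

w_start = modulation_weight(0, 100)    # weak constraint early on
w_mid = modulation_weight(50, 100)     # half strength at the midpoint
w_end = modulation_weight(100, 100)    # near full strength at the end
```

Compared with a hard switch, the smooth ramp avoids a sudden change in the loss landscape partway through training.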
Citations: 0
Spectral-X: Latent prior enhanced spectral CT restoration with mamba-assisted X-net
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | DOI: 10.1016/j.compmedimag.2025.102696
Yikun Zhang , Jiashun Wang , Xi Wang , Xu Ji , Kai Chen , Jian Yang , Yinsheng Li , Yang Chen
Compared with conventional computed tomography (CT), spectral CT can simultaneously visualize internal structures and characterize the material composition of scanned objects by acquiring data at different energy spectra. Photon-counting CT (PCCT) and multi-source CT (MSCT) are two promising implementations of spectral CT. Meanwhile, radiation exposure remains a long-standing concern in CT imaging, as excessive X-ray exposure may lead to genetic and cellular damage. For PCCT and MSCT, the radiation dose can be reduced by lowering the tube current and adopting complementary limited-view scanning, respectively. To mitigate the noise and artifacts induced by low-dose acquisition protocols, this paper proposes a Mamba-assisted X-Net leveraging latent priors for spectral CT, termed Spectral-X. First, considering the intrinsic characteristics of spectral CT, Spectral-X exploits the latent representation of the enhanced full-spectrum prior image to facilitate the restoration of multi-energy CT (MECT). Second, Spectral-X employs an X-shaped network with feature fusion blocks to adaptively capture and leverage multi-scale prior information in the latent space. Third, Spectral-X integrates a novel all-around Mamba mechanism that can efficiently model long-range dependencies, thereby enhancing the performance of the image restoration backbone network. Spectral-X is evaluated on both PCCT denoising and limited-view MSCT restoration tasks, and the experimental results demonstrate that Spectral-X achieves state-of-the-art performance in noise suppression, artifact removal, and structural restoration.
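The "full-spectrum prior image" the abstract refers to is built from all energy bins of a multi-energy acquisition, giving a lower-noise reference for restoration. A toy sketch of the basic idea, where a plain channel mean stands in for the paper's learned, enhanced prior (an assumption made for illustration only):

```python
import numpy as np

def full_spectrum_prior(mect_volume):
    """Average the energy-bin channels of a multi-energy CT stack
    (shape: bins x H x W) to form a lower-noise full-spectrum image.
    A simple mean is an assumption; Spectral-X instead learns an
    enhanced latent prior from this kind of full-spectrum information."""
    return mect_volume.mean(axis=0)

# Two energy bins of a tiny 1x2 image; the prior averages them per pixel.
bins = np.array([[[1.0, 3.0]],
                 [[3.0, 5.0]]])
prior = full_spectrum_prior(bins)
```

Because uncorrelated noise across bins partially cancels in the average, the prior is cleaner than any single-bin image, which is what makes it useful as restoration guidance.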
Citations: 0
Temporally-aware diffusion model for brain progression modelling with bidirectional temporal regularisation
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | DOI: 10.1016/j.compmedimag.2025.102688
Mattia Litrico , Francesco Guarnera , Mario Valerio Giuffrida , Daniele Ravì , Sebastiano Battiato
Generating realistic MRIs to accurately predict future changes in brain structure is an invaluable tool for clinicians in assessing clinical outcomes and analysing disease progression at the patient level. However, existing methods present some limitations: (i) some approaches fail to explicitly capture the relationship between structural changes and time intervals, especially when trained on age-imbalanced datasets; (ii) others rely only on scan interpolation, which lacks clinical utility, as they generate intermediate images between timepoints rather than future pathological progression; and (iii) most approaches rely on 2D slice-based architectures, thereby disregarding full 3D anatomical context, which is essential for accurate longitudinal predictions. We propose a 3D Temporally-Aware Diffusion Model (TADM-3D), which accurately predicts brain progression on MRI volumes. To better model the relationship between time interval and brain changes, TADM-3D uses a pre-trained Brain-Age Estimator (BAE) that guides the diffusion model in the generation of MRIs that accurately reflect the expected age difference between baseline and generated follow-up scans. Additionally, to further improve the temporal awareness of TADM-3D, we propose the Back-In-Time Regularisation (BITR), by training TADM-3D to predict bidirectionally from the baseline to follow-up (forward), as well as from the follow-up to baseline (backward). Although predicting past scans has limited clinical applications, this regularisation helps the model generate temporally more accurate scans. We train and evaluate TADM-3D on the OASIS-3 dataset, and we validate the generalisation performance on an external test set from the NACC dataset. The code is available at https://github.com/MattiaLitrico/TADM-3D.
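The Back-In-Time Regularisation described above trains the model in both temporal directions. A minimal sketch of such a bidirectional objective, where the L2 error and the equal 0.5 weighting are assumptions for illustration; the paper's actual diffusion training loss will differ:

```python
import numpy as np

def bitr_loss(pred_forward, target_followup, pred_backward, target_baseline):
    """Back-In-Time Regularisation sketch: average the forward
    (baseline -> follow-up) and backward (follow-up -> baseline)
    reconstruction errors so both temporal directions are supervised.
    L2 and the equal weighting are illustrative assumptions."""
    fwd = np.mean((pred_forward - target_followup) ** 2)
    bwd = np.mean((pred_backward - target_baseline) ** 2)
    return 0.5 * (fwd + bwd)

baseline = np.array([1.0, 2.0])
followup = np.array([3.0, 4.0])
perfect = bitr_loss(followup, followup, baseline, baseline)  # zero loss
one_way_error = bitr_loss(baseline, followup, baseline, baseline)
```

Supervising the backward direction, even though predicting past scans is not clinically useful by itself, constrains the model to respect the sign and magnitude of the time interval.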
Citations: 0
Wholistic report generation for Breast ultrasound using LangChain
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | DOI: 10.1016/j.compmedimag.2025.102697
Jaeyoung Huh , Hye Shin Ahn , Hyun Jeong Park , Jong Chul Ye
Breast ultrasound (BUS) is a vital imaging technique for detecting and characterizing breast abnormalities. Generating comprehensive BUS reports typically requires integrating multiple image views and patient information, which can be time-consuming for clinicians. This study explores the feasibility of a modular, AI-assisted framework to support BUS report generation, focusing on system integration. We developed a suite of classification networks for image analysis, coordinated via LangChain with Large Language Models (LLMs), to generate structured and clinically meaningful reports. A Retrieval-Augmented Generation (RAG) component allows the framework to incorporate prior patient information, enabling context-aware and personalized report generation. The system demonstrates the practical integration of existing image-analysis models and language-generation tools within a clinical workflow. Experimental evaluations show that the integrated framework produces consistent and clinically interpretable reports, which align well with radiologists’ assessments. These results suggest that the proposed approach provides a feasible, modular, and extensible solution for semi-automated BUS report generation, offering a foundation for further refinement and potential clinical deployment.
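The framework above coordinates per-view classifier outputs and retrieved prior-patient context into a structured report prompt. A plain-Python sketch of that orchestration idea; it deliberately does not use the real LangChain or LLM APIs, and every function and field name here is hypothetical:

```python
def assemble_bus_report(view_findings, prior_context):
    """Assemble a structured BUS report body from per-view classifier
    outputs and retrieved prior-patient context, ready to hand to an
    LLM for narrative generation. Purely illustrative; the paper's
    system coordinates this via LangChain with RAG."""
    lines = ["FINDINGS:"]
    for view, finding in sorted(view_findings.items()):
        lines.append(f"- {view}: {finding}")
    if prior_context:
        lines.append("PRIOR CONTEXT:")
        lines.extend(f"- {c}" for c in prior_context)
    return "\n".join(lines)

report = assemble_bus_report(
    {"right breast, radial view": "hypoechoic mass, irregular margin"},
    ["2023 exam: simple cyst, BI-RADS 2"],
)
```

Keeping the image-analysis outputs structured before the language-generation step is what makes the resulting report auditable against the classifiers' raw predictions.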
Citations: 0
Computerized medical imaging and graphics best paper award 2024.
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Epub Date: 2025-12-04 | DOI: 10.1016/j.compmedimag.2025.102683
Xiaoyin Xu, Stephen TC Wong
Citations: 0
Research on X-ray coronary artery branches instance segmentation and matching task
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-26 | DOI: 10.1016/j.compmedimag.2025.102681
Xiaodong Zhou , Huibin Wang
In 3D reconstruction of the X-ray coronary artery tree, matching vessel branches across different viewpoints is challenging. In this study, this task is transformed into instance segmentation of vessel branches followed by matching of branches with the same color, and an instance segmentation network (YOLO-CAVBIS) is proposed specifically for deformed and dynamic vessels. First, since the left and right coronary artery branches are not easy to distinguish, a coronary artery classification dataset is constructed and the left and right coronary arteries are classified using the YOLOv8-cls classification model; the classified images are then fed into two parallel YOLO-CAVBIS networks for coronary artery branch instance segmentation. Finally, branches with the same color are matched across viewpoints. The experimental results show that the accuracy of the coronary artery classification model reaches 100%, the mAP50 of the proposed left coronary branches instance segmentation model reaches 98.4%, and the mAP50 of the proposed right coronary branches instance segmentation model reaches 99.4%. In terms of extracting deformed and dynamic vascular features, our proposed YOLO-CAVBIS network demonstrates greater specificity and superiority compared to other instance segmentation networks, and can serve as a baseline model for the task of coronary artery branch instance segmentation. Code repository: https://gitee.com/zaleman/ca_instance_segmentation, https://github.com/zaleman/ca_instance_segmentation.
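Once each branch instance carries a predicted class (rendered as a color in such pipelines), cross-view matching reduces to pairing instances with the same label. A minimal sketch of that pairing step; the branch ids and class names are illustrative, not the paper's data:

```python
def match_branches(view_a, view_b):
    """Match vessel-branch instances across two viewpoints by their
    predicted branch class. Each input maps an instance id to its class
    label; branches whose class appears in only one view stay unmatched.
    Assumes at most one instance per class per view, as one coronary
    branch of each type is expected in a single projection."""
    by_class_b = {cls: bid for bid, cls in view_b.items()}
    return {bid_a: by_class_b[cls]
            for bid_a, cls in view_a.items() if cls in by_class_b}

pairs = match_branches(
    {"a1": "LAD", "a2": "LCX", "a3": "diagonal"},
    {"b1": "LCX", "b2": "LAD"},
)
```

Matching by class label sidesteps appearance-based correspondence entirely, which is why per-branch instance segmentation quality drives the matching result.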
Citations: 0
Semi-supervised medical image classification via feature-level multi-scale consistency and adversarial training
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-12-26 DOI: 10.1016/j.compmedimag.2025.102695
Li Shiyan, Wang Shuqin, Gu Xin, Sun Debing
In recent years, semi-supervised learning (SSL) has attracted increasing attention in medical image analysis, showing great potential in scenarios with limited annotations. However, existing consistency regularization methods suffer from several limitations: overly uniform constraints at the output layer, lack of interaction within adversarial strategies, and reliance on external sample pools for sample estimation, which together lead to insufficient use of feature-level information and unstable training. To address these challenges, this paper proposes a novel semi-supervised framework, termed Feature-level multi-scale Consistency and Adversarial Training (FCAT). A multi-scale feature-level consistency mechanism is introduced to capture hierarchical structural representations through cross-level feature fusion, enabling robust feature alignment without relying on external sample pools. To overcome the limitation of unidirectional adversarial training, a bidirectional feature perturbation strategy is designed under a teacher–student collaboration scheme, where both models generate perturbations from their own gradients and enforce mutual consistency. In addition, an intrinsic evaluation mechanism based on entropy and complementary confidence is developed to rank unlabeled samples according to their information content, guiding the training process toward informative hard samples while reducing overfitting to trivial ones. Experiments on the balanced Pneumonia Chest X-ray and NCT-CRC-HE histopathology datasets, as well as the imbalanced ISIC 2019 dermoscopic skin lesion dataset, demonstrate that our FCAT achieves competitive performance and strong generalization across diverse imaging modalities and data distributions.
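As a toy illustration of the entropy part of that sample-ranking idea (the paper combines entropy with complementary confidence, which is omitted here), unlabeled samples can be ordered by the Shannon entropy of their predicted class distributions, so that ambiguous, information-rich samples come first:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = less confident)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_by_informativeness(predictions):
    """Indices of unlabeled samples, most informative (highest entropy) first."""
    return sorted(range(len(predictions)), key=lambda i: entropy(predictions[i]), reverse=True)

preds = [
    [0.95, 0.03, 0.02],  # confident  -> low entropy, trivial sample
    [0.40, 0.35, 0.25],  # ambiguous  -> high entropy, hard sample
    [0.70, 0.20, 0.10],
]
order = rank_by_informativeness(preds)  # -> [1, 2, 0]
```

This is only a sketch of the ranking criterion; the actual FCAT mechanism scores samples intrinsically during training rather than in a separate pass.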
{"title":"Semi-supervised medical image classification via feature-level multi-scale consistency and adversarial training","authors":"Li Shiyan,&nbsp;Wang Shuqin,&nbsp;Gu Xin,&nbsp;Sun Debing","doi":"10.1016/j.compmedimag.2025.102695","DOIUrl":"10.1016/j.compmedimag.2025.102695","url":null,"abstract":"<div><div>In recent years, semi-supervised learning (SSL) has attracted increasing attention in medical image analysis, showing great potential in scenarios with limited annotations. However, existing consistency regularization methods suffer from several limitations: overly uniform constraints at the output layer, lack of interaction within adversarial strategies, and reliance on external sample pools for sample estimation, which together lead to insufficient use of feature-level information and unstable training. To address these challenges, this paper proposes a novel semi-supervised framework, termed Feature-level multi-scale Consistency and Adversarial Training (FCAT). A multi-scale feature-level consistency mechanism is introduced to capture hierarchical structural representations through cross-level feature fusion, enabling robust feature alignment without relying on external sample pools. To overcome the limitation of unidirectional adversarial training, a bidirectional feature perturbation strategy is designed under a teacher–student collaboration scheme, where both models generate perturbations from their own gradients and enforce mutual consistency. In addition, an intrinsic evaluation mechanism based on entropy and complementary confidence is developed to rank unlabeled samples according to their information content, guiding the training process toward informative hard samples while reducing overfitting to trivial ones. 
Experiments on the balanced Pneumonia Chest X-ray and NCT-CRC-HE histopathology datasets, as well as the imbalanced ISIC 2019 dermoscopic skin lesion dataset, demonstrate that our FCAT achieves competitive performance and strong generalization across diverse imaging modalities and data distributions.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"128 ","pages":"Article 102695"},"PeriodicalIF":4.9,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UltraBoneUDF: Self-supervised bone surface reconstruction from ultrasound based on neural unsigned distance functions
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-12-19 DOI: 10.1016/j.compmedimag.2025.102690
Luohong Wu , Matthias Seibold , Nicola A. Cavalcanti , Giuseppe Loggia , Lisa Reissner , Bastian Sigrist , Jonas Hein , Lilian Calvet , Arnd Viehöfer , Philipp Fürnstahl

Background:

Bone surface reconstruction is an essential component of computer-assisted orthopedic surgery (CAOS), forming the foundation for both preoperative planning and intraoperative guidance. Compared to traditional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound, an emerging CAOS technology, provides a radiation-free, cost-effective, and portable alternative. While ultrasound offers new opportunities in CAOS, technical shortcomings continue to hinder its translation into surgery. In particular, due to the inherent limitations of ultrasound imaging, B-mode ultrasound typically captures only partial bone surfaces. The inter- and intra-operator variability in ultrasound scanning further increases the complexity of the data. Existing reconstruction methods struggle with such challenging data, leading to increased reconstruction errors and artifacts, such as holes and inflated structures. Effective techniques for accurately reconstructing open bone surfaces from real-world 3D ultrasound volumes remain lacking.

Methods:

We propose UltraBoneUDF, a self-supervised framework specifically designed for reconstructing open bone surfaces from ultrasound data. It learns unsigned distance functions (UDFs) from 3D ultrasound data. In addition, we present a novel loss function based on local tangent plane optimization that substantially improves surface reconstruction quality. UltraBoneUDF and competing models are benchmarked on three open-source datasets and further evaluated through ablation studies.
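For intuition: an unsigned distance function maps any 3-D query point to its distance from the surface with no inside/outside sign, which is what lets UDFs represent open, non-watertight surfaces like partially imaged bone. The analytic sphere below is a toy stand-in for the learned neural UDF, not the paper's model:

```python
import numpy as np

def sphere_udf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Unsigned distance to a sphere's surface: | ||p - c|| - r |.
    A closed-form stand-in for a learned neural UDF: every query point
    gets a non-negative distance to the surface, with no sign to
    distinguish inside from outside."""
    d = np.linalg.norm(np.asarray(points) - np.asarray(center), axis=-1)
    return np.abs(d - radius)

pts = np.array([[2.0, 0.0, 0.0],   # outside the unit sphere: distance 1.0
                [0.5, 0.0, 0.0],   # inside:                  distance 0.5
                [0.0, 1.0, 0.0]])  # exactly on the surface:  distance 0.0
dists = sphere_udf(pts)
```

A mesh is then typically extracted near the zero level set of the learned field; how that extraction is done is beyond this sketch.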

Results:

Qualitative results demonstrate the limitations of the state-of-the-art methods. Quantitatively, UltraBoneUDF achieves comparable or lower bi-directional Chamfer distance across three datasets with fewer parameters: 1.60 mm on the UltraBones100k dataset (25.5% improvement), 0.21 mm on the OpenBoneCT dataset, and 0.18 mm on the ClosedBoneCT dataset.
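For reference, the bi-directional Chamfer distance reported above measures how far each point cloud is from the other: the mean nearest-neighbor distance from A to B plus the same from B to A. Conventions vary between papers (squared vs. unsquared distances, sum vs. average of the two directions), so the following is one common variant, not necessarily the paper's exact definition:

```python
import numpy as np

def chamfer_bidirectional(a, b):
    """Bi-directional Chamfer distance between point sets a (N,3) and b (M,3):
    mean nearest-neighbor distance a->b plus mean nearest-neighbor distance b->a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0]])
cd = chamfer_bidirectional(a, b)  # a->b mean = 0.5, b->a mean = 0.0, total 0.5
```

The brute-force (N, M) distance matrix is fine for small clouds; real evaluations on dense surface samples would use a KD-tree for the nearest-neighbor queries.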

Conclusion:

UltraBoneUDF represents a promising solution for open bone surface reconstruction from 3D ultrasound volumes, with the potential to advance downstream applications in CAOS.
{"title":"UltraBoneUDF: Self-supervised bone surface reconstruction from ultrasound based on neural unsigned distance functions","authors":"Luohong Wu ,&nbsp;Matthias Seibold ,&nbsp;Nicola A. Cavalcanti ,&nbsp;Giuseppe Loggia ,&nbsp;Lisa Reissner ,&nbsp;Bastian Sigrist ,&nbsp;Jonas Hein ,&nbsp;Lilian Calvet ,&nbsp;Arnd Viehöfer ,&nbsp;Philipp Fürnstahl","doi":"10.1016/j.compmedimag.2025.102690","DOIUrl":"10.1016/j.compmedimag.2025.102690","url":null,"abstract":"<div><h3>Background:</h3><div>Bone surface reconstruction is an essential component of computer-assisted orthopedic surgery (CAOS), forming the foundation for both preoperative planning and intraoperative guidance. Compared to traditional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound, an emerging CAOS technology, provides a radiation-free, cost-effective, and portable alternative. While ultrasound offers new opportunities in CAOS, technical shortcomings continue to hinder its translation into surgery. In particular, due to the inherent limitations of ultrasound imaging, B-mode ultrasound typically captures only partial bone surfaces. The inter- and intra-operator variability in ultrasound scanning further increases the complexity of the data. Existing reconstruction methods struggle with such challenging data, leading to increased reconstruction errors and artifacts, such as holes and inflated structures. Effective techniques for accurately reconstructing open bone surfaces from real-world 3D ultrasound volumes remain lacking.</div></div><div><h3>Methods:</h3><div>We propose UltraBoneUDF, a self-supervised framework specifically designed for reconstructing open bone surfaces from ultrasound data. It learns unsigned distance functions (UDFs) from 3D ultrasound data. In addition, we present a novel loss function based on local tangent plane optimization that substantially improves surface reconstruction quality. 
UltraBoneUDF and competing models are benchmarked on three open-source datasets and further evaluated through ablation studies.</div></div><div><h3>Results:</h3><div>Qualitative results demonstrate the limitations of the state-of-the-art methods. Quantitatively, UltraBoneUDF achieves comparable or lower bi-directional Chamfer distance across three datasets with fewer parameters: 1.60 mm on the UltraBones100k dataset (<span><math><mrow><mo>≈</mo><mn>25</mn><mo>.</mo><mn>5</mn><mtext>%</mtext></mrow></math></span> improvement), 0.21 mm on the OpenBoneCT dataset, and 0.18 mm on the ClosedBoneCT dataset.</div></div><div><h3>Conclusion:</h3><div>UltraBoneUDF represents a promising solution for open bone surface reconstruction from 3D ultrasound volumes, with the potential to advance downstream applications in CAOS.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"127 ","pages":"Article 102690"},"PeriodicalIF":4.9,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0