
Latest articles in Medical image analysis

Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.media.2026.103945
Tobias Rueckert, David Rauber, Raphaela Maerkl, Leonard Klausmann, Suemeyye R. Yildiran, Max Gutbrod, Danilo Weber Nunes, Alvaro Fernandez Moreno, Imanol Luengo, Danail Stoyanov, Nicolas Toussaint, Enki Cho, Hyeon Bae Kim, Oh Sung Choo, Ka Young Kim, Seong Tae Kim, Gonçalo Arantes, Kehan Song, Jianjun Zhu, Junchen Xiong, Tingyi Lin, Shunsuke Kikuchi, Hiroki Matsuzaki, Atsushi Kouno, João Renato Ribeiro Manesco, João Paulo Papa, Tae-Min Choi, Tae Kyeong Jeong, Juyoun Park, Oluwatosin Alabi, Meng Wei, Tom Vercauteren, Runzhi Wu, Mengya Xu, An Wang, Long Bai, Hongliang Ren, Amine Yamlahi, Jakob Hennighausen, Lena Maier-Hein, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Shu Yang, Yihui Wang, Hao Chen, Santiago Rodríguez, Nicolás Aparicio, Leonardo Manrique, Juan Camilo Lyons, Olivia Hosie, Nicolás Ayobi, Pablo Arbeláez, Yiping Li, Yasmina Al Khalil, Sahar Nasirihaghighi, Stefanie Speidel, Daniel Rueckert, Hubertus Feussner, Dirk Wilhelm, Christoph Palm
{"title":"Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge","authors":"Tobias Rueckert, David Rauber, Raphaela Maerkl, Leonard Klausmann, Suemeyye R. Yildiran, Max Gutbrod, Danilo Weber Nunes, Alvaro Fernandez Moreno, Imanol Luengo, Danail Stoyanov, Nicolas Toussaint, Enki Cho, Hyeon Bae Kim, Oh Sung Choo, Ka Young Kim, Seong Tae Kim, Gonçalo Arantes, Kehan Song, Jianjun Zhu, Junchen Xiong, Tingyi Lin, Shunsuke Kikuchi, Hiroki Matsuzaki, Atsushi Kouno, João Renato Ribeiro Manesco, João Paulo Papa, Tae-Min Choi, Tae Kyeong Jeong, Juyoun Park, Oluwatosin Alabi, Meng Wei, Tom Vercauteren, Runzhi Wu, Mengya Xu, An Wang, Long Bai, Hongliang Ren, Amine Yamlahi, Jakob Hennighausen, Lena Maier-Hein, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Shu Yang, Yihui Wang, Hao Chen, Santiago Rodríguez, Nicolás Aparicio, Leonardo Manrique, Juan Camilo Lyons, Olivia Hosie, Nicolás Ayobi, Pablo Arbeláez, Yiping Li, Yasmina Al Khalil, Sahar Nasirihaghighi, Stefanie Speidel, Daniel Rueckert, Hubertus Feussner, Dirk Wilhelm, Christoph Palm","doi":"10.1016/j.media.2026.103945","DOIUrl":"https://doi.org/10.1016/j.media.2026.103945","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"3 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145961645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting Diabetic Macular Edema Treatment Responses Using OCT: Dataset and Methods of APTOS Competition
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-13 · DOI: 10.1016/j.media.2026.103942
Weiyi Zhang, Peranut Chotcomwongse, Yinwen Li, Pusheng Xu, Ruijie Yao, Lianhao Zhou, Yuxuan Zhou, Hui Feng, Qiping Zhou, Xinyue Wang, Shoujin Huang, Zihao Jin, Florence H T Chung, Shujun Wang, Yalin Zheng, Mingguang He, Danli Shi, Paisan Ruamviboonsuk
{"title":"Predicting Diabetic Macular Edema Treatment Responses Using OCT: Dataset and Methods of APTOS Competition","authors":"Weiyi Zhang, Peranut Chotcomwongse, Yinwen Li, Pusheng Xu, Ruijie Yao, Lianhao Zhou, Yuxuan Zhou, Hui Feng, Qiping Zhou, Xinyue Wang, Shoujin Huang, Zihao Jin, Florence H T Chung, Shujun Wang, Yalin Zheng, Mingguang He, Danli Shi, Paisan Ruamviboonsuk","doi":"10.1016/j.media.2026.103942","DOIUrl":"https://doi.org/10.1016/j.media.2026.103942","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"5 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145962446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Depth-Induced Prompt Learning for Laparoscopic Liver Landmark Detection
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-13 · DOI: 10.1016/j.media.2026.103940
Ruize Cui, Weixin Si, Zhixi Li, Kai Wang, Jialun Pei, Pheng-Ann Heng, Jing Qin
{"title":"Depth-Induced Prompt Learning for Laparoscopic Liver Landmark Detection","authors":"Ruize Cui, Weixin Si, Zhixi Li, Kai Wang, Jialun Pei, Pheng-Ann Heng, Jing Qin","doi":"10.1016/j.media.2026.103940","DOIUrl":"https://doi.org/10.1016/j.media.2026.103940","url":null,"abstract":"","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"265 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145962447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust non-rigid image-to-patient registration for contactless dynamic thoracic tumor localization using recursive deformable diffusion models
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-12 · DOI: 10.1016/j.media.2026.103948
Dongyuan Li, Yixin Shan, Yuxuan Mao, Puxun Tu, Haochen Shi, Shenghao Huang, Weiyan Sun, Chang Chen, Xiaojun Chen
Deformable image-to-patient registration is essential for surgical navigation and medical imaging, yet real-time computation of spatial transformations across modalities remains a major clinical challenge: it is often time-consuming and error-prone, and it can increase trauma or radiation exposure. While state-of-the-art methods achieve impressive speed and accuracy on paired medical images, they face notable limitations in cross-modal thoracic applications, where physiological motions such as respiration complicate tumor localization. To address this, we propose a robust, contactless, non-rigid registration framework for dynamic thoracic tumor localization. A highly efficient Recursive Deformable Diffusion Model (RDDM) is trained to reconstruct comprehensive 4DCT sequences from only end-inhalation and end-exhalation scans, capturing respiratory dynamics reflective of the intraoperative state. For real-time patient alignment, we introduce a contactless non-rigid registration algorithm based on GICP, leveraging patient skin-surface point clouds captured by stereo RGB-D imaging. By incorporating normal-vector and expansion–contraction constraints, the method enhances robustness and avoids local minima. The proposed framework was validated on publicly available datasets and in volunteer trials. Quantitative evaluations demonstrated the RDDM's anatomical fidelity across respiratory phases, achieving a PSNR of 34.01 ± 2.78 dB. Moreover, we have preliminarily developed a 4DCT-based registration and surgical navigation module to support tumor localization and high-precision tracking. Experimental results indicate that the proposed framework preliminarily meets clinical requirements and demonstrates potential for integration into downstream surgical systems.
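For orientation, the sketch below shows a single normal-weighted point-to-plane ICP update, the kind of surface-alignment step that GICP-style registration builds on. It is a minimal illustration only: the function name, correspondence-rejection threshold, and weighting are assumptions, and the paper's non-rigid, expansion–contraction-constrained formulation is not reproduced.

import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_step(src, tgt, tgt_normals, max_dist=10.0):
    """One linearized rigid update aligning src (N, 3) toward tgt (M, 3)."""
    idx = cKDTree(tgt).query(src)[1]              # nearest-neighbor correspondences
    q, n = tgt[idx], tgt_normals[idx]
    d = np.einsum("ij,ij->i", src - q, n)         # signed point-to-plane distances
    w = (np.abs(d) < max_dist).astype(float)      # reject distant correspondences
    A = np.hstack([np.cross(src, n), n])          # small-angle linearization, unknowns [r | t]
    x, *_ = np.linalg.lstsq(A * w[:, None], -d * w, rcond=None)
    rx, ry, rz, t = *x[:3], x[3:]
    R = np.eye(3) + np.array([[0, -rz, ry],       # first-order rotation update I + skew(r)
                              [rz, 0, -rx],
                              [-ry, rx, 0]])
    return src @ R.T + t                          # in practice, iterate until convergence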
Citations: 0
C2HFusion: Clinical context-driven hierarchical fusion of multimodal data for personalized and quantitative prognostic assessment in pancreatic cancer
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-11 · DOI: 10.1016/j.media.2026.103937
Bolun Zeng, Yaolin Xu, Peng Wang, Tianyu Lu, Zongyu Xie, Mengsu Zeng, Jianjun Zhou, Liang Liu, Haitao Sun, Xiaojun Chen
Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive malignancy. Accurate prognostic modeling enables reliable risk stratification to identify patients most likely to benefit from adjuvant therapy, thereby facilitating individualized clinical management and potentially improving patient outcomes. Although recent deep learning approaches have shown promise in this area, their effectiveness is often constrained by fusion strategies that fail to fully capture the hierarchical and complementary information across heterogeneous clinical modalities. To address these limitations, we propose C2HFusion, a novel fusion framework inspired by clinical decision-making for personalized prognostic risk assessment. C2HFusion is unique in that it integrates multimodal data across multiple representational levels and structural forms. At the imaging level, it extracts and aggregates tumor-level features from multi-sequence MRI using cross-attention, effectively capturing complementary imaging patterns. At the patient level, it encodes structured data (e.g., laboratory results, demographics) and unstructured data (e.g., radiology reports) as contextual priors, which are then fused with imaging representations through a novel feature modulation mechanism. To further enhance this cross-level integration, a scalable Mixture-of-Clinical-Experts (MoCE) module dynamically routes different modalities through specialized branches and adaptively optimizes feature fusion for more robust multimodal modeling. Validation on multi-center real-world datasets covering 681 PDAC patients shows that C2HFusion consistently outperforms state-of-the-art methods in overall survival prediction, achieving over a 5% improvement in C-index. These results highlight its potential to improve prognostic accuracy and support more informed, personalized clinical decision-making.
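As a rough illustration of the Mixture-of-Clinical-Experts idea, the sketch below softly routes a fused feature vector through several expert MLPs with a learned softmax gate. The dimensions, expert count, and dense (non-sparse) gating are assumptions made for the example; the paper's module routes modalities through specialized branches and may differ substantially.

import torch
import torch.nn as nn

class MoCEFusion(nn.Module):
    """Toy mixture-of-experts fusion: gate a fused embedding through
    several expert MLPs and blend their outputs."""
    def __init__(self, dim=256, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(dim, n_experts)            # learned routing weights

    def forward(self, fused):                            # fused: (batch, dim)
        w = self.gate(fused).softmax(dim=-1)             # (batch, n_experts)
        out = torch.stack([e(fused) for e in self.experts], dim=1)
        return (w.unsqueeze(-1) * out).sum(dim=1)        # expert-weighted blend

# z = MoCEFusion(dim=256)(feat)  # feat: (batch, 256) joint imaging+clinical embedding (assumed)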
Citations: 0
Facial appearance prediction for orthognathic surgery with diffusion models
IF 11.8 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-11 · DOI: 10.1016/j.media.2026.103934
Jungwook Lee, Xuanang Xu, Daeseung Kim, Tianshu Kuang, Hannah H. Deng, Xinrui Song, Yasmine Soubra, Michael A.K. Liebschner, Jaime Gateno, Pingkun Yan
Orthognathic surgery corrects craniomaxillofacial deformities by repositioning skeletal structures to improve facial aesthetics and function. Conventional orthognathic surgical planning is largely bone-driven: bone repositioning is defined first, and soft-tissue outcomes are then predicted. However, this approach is limited by its reliance on surgeon-defined bone plans and its inability to directly optimize for patient-specific aesthetic outcomes. To address these limitations, the soft-tissue-driven paradigm seeks to first predict a patient-specific optimal facial appearance and subsequently derive the skeletal changes required to achieve it. In this work, we introduce FAPOS (Facial Appearance Prediction for Orthognathic Surgery), a novel transformer-based latent diffusion framework that directly predicts a normal-looking 3D facial outcome from pre-operative scans to enable soft-tissue-driven planning. FAPOS utilizes a dense 282-landmark representation and is trained on a combined dataset of 44,602 public 3D faces, overcoming the limitations of data scarcity and missing correspondences. Our three-phase training pipeline combines geometric encoding, latent diffusion modeling, and patient-specific conditioning. Quantitative and qualitative results show that FAPOS outperforms prior methods with improved facial symmetry and identity preservation. These results mark an important step toward soft-tissue-driven surgical planning, with FAPOS providing an optimal facial target that serves as the basis for estimating the skeletal adjustments in subsequent stages.
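To make the diffusion idea concrete, here is a minimal DDPM-style noise-prediction training step on flattened 282-point landmark sets. Only the landmark count comes from the abstract; the toy MLP denoiser, the linear noise schedule, and the absence of patient conditioning are assumptions, not the FAPOS architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                    # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# toy denoiser: predicts the injected noise from noisy landmarks + timestep
denoiser = nn.Sequential(nn.Linear(282 * 3 + 1, 512), nn.SiLU(), nn.Linear(512, 282 * 3))

def diffusion_loss(x0):                                  # x0: (batch, 282*3) flattened landmarks
    t = torch.randint(0, T, (x0.shape[0],))
    a = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps          # forward noising step
    inp = torch.cat([xt, t.float().unsqueeze(-1) / T], dim=-1)
    return F.mse_loss(denoiser(inp), eps)                # standard epsilon-prediction objective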
Citations: 0
UTMorph: A hybrid CNN-transformer network for weakly-supervised multimodal image registration in biopsy puncture
IF 11.8 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-10 · DOI: 10.1016/j.media.2026.103938
Xudong Guo, Peiyu Chen, Haifeng Wang, Zhichao Yan, Qinfen Jiang, Rongjiang Wang, Ji Bin
Accurate registration of preoperative magnetic resonance imaging (MRI) and intraoperative ultrasound (US) images is essential to enhance the precision of biopsy punctures and targeted ablation procedures using robotic systems. To improve the speed and accuracy of registration algorithms while accounting for soft-tissue deformation during puncture, we propose UTMorph, a hybrid framework that combines a convolutional neural network (CNN) with a Transformer, built on the U-Net architecture. The model is designed to enable efficient, deformable multimodal image registration. We introduce a novel attention mechanism that focuses on the structured features of images, ensuring precise deformation estimation while reducing computational complexity. In addition, we propose a hybrid edge loss function that complements shape and boundary information, thereby improving registration accuracy. Experiments were conducted on data from 704 patients, including private datasets from Shanghai East Hospital, public datasets from The Cancer Imaging Archive, and the µ-ProReg Challenge. The performance of UTMorph was compared with that of six commonly used registration methods and loss functions. UTMorph achieved superior performance across multiple evaluation metrics (Dice similarity coefficient: 0.890; 95th-percentile Hausdorff distance: 2.679 mm; mean surface distance: 0.284 mm; Jacobian determinant: 0.040) and ensures accurate registration with minimal memory usage, even under significant modality differences. These findings validate the effectiveness of the UTMorph model with the hybrid edge loss function for MR–US deformable medical image registration. The code is available at https://github.com/Prps7/UTMorph.
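The sketch below illustrates one plausible reading of a "hybrid edge loss": a soft Dice term on masks plus an L1 term between Sobel edge maps, so shape and boundary information both contribute. The edge operator and the weight lam are assumptions; UTMorph's exact formulation is not reproduced here.

import torch
import torch.nn.functional as F

def sobel_edges(x):                                      # x: (batch, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    gx = F.conv2d(x, kx, padding=1)                      # horizontal gradients
    gy = F.conv2d(x, kx.transpose(2, 3), padding=1)      # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def hybrid_edge_loss(pred, target, lam=0.5):
    """Soft Dice on masks plus an L1 penalty on their edge maps."""
    inter = (pred * target).sum()
    dice = 1.0 - 2.0 * inter / (pred.sum() + target.sum() + 1e-8)
    edge = F.l1_loss(sobel_edges(pred), sobel_edges(target))
    return dice + lam * edge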
Citations: 0
MegaSeg: Towards scalable semantic segmentation for megapixel images
IF 11.8 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-10 · DOI: 10.1016/j.media.2026.103933
Solomon Kefas Kaura, Jialun Wu, Zeyu Gao, Chen Li
Megapixel image segmentation is essential for high-resolution histopathology image analysis but is currently constrained by GPU memory limits, necessitating patching and downsampling pipelines that compromise global and local context. This paper introduces MegaSeg, an end-to-end framework for semantic segmentation of megapixel images that leverages streaming convolutional networks within a U-shaped architecture and a divide-and-conquer strategy. MegaSeg enables efficient semantic segmentation of 8192×8192-pixel images (67 MP) without sacrificing detail or structural context while significantly reducing memory usage. Furthermore, we propose the Attentive Dense Refinement Module (ADRM), which retains and refines local detail in the MegaSeg decoder path while capturing the contextual information present in high-resolution images. Experiments on public histopathology datasets demonstrate superior performance, preserving both global structure and local details. On CAMELYON16, MegaSeg improves the Free Response Operating Characteristic (FROC) score from 0.78 to 0.89 when the input size is scaled from 4 MP to 67 MP, highlighting its effectiveness for large-scale medical image segmentation.
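For context, the conventional alternative that streaming methods are designed to replace is overlap-tiled inference, sketched below: the network only ever sees fixed-size crops, which is exactly what costs global context and motivates MegaSeg's end-to-end design. Tile size, stride, the averaging rule, and the model interface are assumptions; the image is assumed to be at least one tile in each dimension.

import torch

@torch.no_grad()
def tiled_segment(model, image, tile=1024, stride=768, n_classes=2):
    """Overlap-tile inference baseline: run `model` on crops and average logits."""
    _, H, W = image.shape                                # image: (C, H, W), H, W >= tile
    ys = sorted({*range(0, H - tile + 1, stride), H - tile})
    xs = sorted({*range(0, W - tile + 1, stride), W - tile})
    logits = torch.zeros(n_classes, H, W)
    count = torch.zeros(1, H, W)
    for y in ys:
        for x in xs:
            crop = image[:, y:y + tile, x:x + tile].unsqueeze(0)
            logits[:, y:y + tile, x:x + tile] += model(crop)[0]   # model returns (1, n_classes, tile, tile)
            count[:, y:y + tile, x:x + tile] += 1.0
    return logits / count                                # every pixel is covered at least once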
Citations: 0
Unlocking 2D/3D+T myocardial mechanics from cine MRI: a mechanically regularized space-time finite element correlation framework
IF 11.8 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-10 · DOI: 10.1016/j.media.2026.103944
Haizhou Liu, Xueling Qin, Zhou Liu, Yuxi Jin, Heng Jiang, Yunlong Gao, Jidong Han, Yijia Zheng, Heng Sun, Lingtao Mao, François Hild, Hairong Zheng, Dong Liang, Na Zhang, Jiuping Liang, Dehong Luo, Zhanli Hu
Accurate and biomechanically consistent quantification of cardiac motion remains a major challenge in cine MRI analysis. While classical feature-tracking and recent deep learning methods have improved frame-wise strain estimation, they often lack biomechanical interpretability and temporal coherence. In this study, we propose a space-time-regularized finite-element digital image/volume correlation (FE-DIC/DVC) framework that enables 2D/3D+T myocardial motion tracking and strain analysis using only routine cine MRI. The method unifies multiview alignment and 2D/3D+T motion estimation into a coherent pipeline, combining region-specific biomechanical regularization with data-driven temporal decomposition to promote spatial fidelity and temporal consistency. A correlation-based multiview alignment module further enhances anatomical consistency across short- and long-axis views. We evaluate the approach on one synthetic dataset (with ground-truth motion and strain fields), three public datasets (with ground-truth landmarks or myocardial masks), and a clinical dataset (with ground-truth myocardial masks). 2D+T motion and strain are evaluated across all datasets, whereas multiview alignment and 3D+T motion estimation are assessed only on the clinical dataset. Compared with two classical feature-tracking methods and four state-of-the-art deep-learning baselines, the proposed method improves 2D+T motion and strain estimation accuracy as well as temporal consistency on the synthetic data, achieving a displacement RMSE of 0.35 pixels (vs. 0.73 pixels), an equivalent-strain RMSE of 0.05 (vs. 0.097), and a temporal consistency of 0.97 (vs. 0.91). On public and clinical data, it achieves superior performance in terms of a landmark error of 1.96 mm (vs. 3.15 mm), a boundary-tracking Dice of 0.80–0.87 (a 2–4% improvement over the best-performing baseline), and overall registration quality that consistently ranks among the top two methods. By leveraging only standard cine MRI, this work enables 2D/3D+T myocardial mechanics analysis and provides a practical route toward 4D cardiac function assessment.
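As background on FE-DIC/DVC, the classical (textbook) objective estimates nodal displacements by minimizing a gray-level residual between the reference image f and the deformed image g, with the displacement field interpolated by finite-element shape functions and, here, an added mechanical regularization term. This generic form is for orientation only and is not the paper's exact space-time functional:

\[
\{u\}^{\star} \;=\; \arg\min_{\{u\}} \int_{\Omega} \Big[\, f(\mathbf{x}) \;-\; g\Big(\mathbf{x} + \sum_{i} u_{i}\, N_{i}(\mathbf{x})\Big) \Big]^{2} \,\mathrm{d}\mathbf{x} \;+\; \lambda\, \mathcal{R}(\{u\})
\]

Here N_i are the finite-element shape functions, u_i the nodal displacement degrees of freedom, and R a mechanical penalty (for example an equilibrium-gap or elastic-energy term) weighted by lambda.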
Citations: 0
Knowledge-guided multi-geometric window transformer for cardiac cine MRI reconstruction
IF 10.9 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-09 · DOI: 10.1016/j.media.2026.103936
Jun Lyu, Guangming Wang, Yunqi Wang, Chengyan Wang, Jing Qin
Magnetic resonance imaging (MRI) plays a crucial role in clinical diagnosis, yet traditional MR image acquisition often requires a prolonged duration, potentially causing patient discomfort and image artifacts. Faster and more accurate image reconstruction can alleviate patient discomfort during MRI examinations and enhance diagnostic accuracy and efficiency. In recent years, significant advances in deep learning have shown promise for improving MR image quality and accelerating acquisition. Addressing the demand for cardiac cine MRI reconstruction, we propose KGMgT, a novel knowledge-guided MRI reconstruction network. The KGMgT model leverages adaptive spatiotemporal attention mechanisms to infer motion trajectories across adjacent cardiac frames, thereby better extracting complementary information. Additionally, we employ Transformer-driven dynamic feature aggregation to establish long-range dependencies, facilitating global information integration. Experiments demonstrate that the KGMgT model achieves state-of-the-art performance on multiple benchmark datasets, offering an efficient solution for cardiac cine MRI reconstruction. This collaborative approach, which combines artificial intelligence with the clinical decision-making of medical professionals, holds promise for significantly improving diagnostic efficiency, optimizing treatment plans, and enhancing the patient treatment experience. The code and trained models are available at https://github.com/MICV-Lab/KGMgT.
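For readers new to learned MRI reconstruction, the step that most such pipelines (attention-based ones included) alternate with their network blocks is k-space data consistency, sketched below. This is generic background rather than KGMgT's specific module; the float sampling-mask convention is an assumption.

import torch

def data_consistency(x, y, mask):
    """Overwrite the network's k-space estimate with measured samples where acquired.
    x: (batch, H, W) complex image estimate; y: measured k-space; mask: 1.0 where sampled."""
    k = torch.fft.fft2(x, norm="ortho")
    k = mask * y + (1.0 - mask) * k          # keep acquired lines, fill in the rest
    return torch.fft.ifft2(k, norm="ortho")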
Citations: 0