
Latest Publications from Computerized Medical Imaging and Graphics

Adjacent point aided vertebral landmark detection and Cobb angle measurement for automated AIS diagnosis
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-30 · DOI: 10.1016/j.compmedimag.2025.102496
Xiaopeng Du , Hongyu Wang , Lihang Jiang , Changlin Lv , Yongming Xi , Huan Yang
Adolescent Idiopathic Scoliosis (AIS) is a prevalent structural deformity of the human spine, and accurate assessment of spinal anatomical parameters is essential for clinical diagnosis and treatment planning. In recent years, deep learning methods have made significant progress in automatic AIS diagnosis. However, effectively exploiting spinal structure information to improve parameter measurement and diagnostic accuracy from spinal X-ray images remains challenging. This paper proposes a novel spine keypoint detection framework for intelligent AIS diagnosis, aided by the rigid structural information of the spine. Specifically, a deep learning architecture called Landmark and Adjacent offset Detection (LAD-Net) is designed to predict spinal centre and corner points together with their associated offset vectors, from which erroneously detected landmarks can be corrected via the proposed Adjacent Centre Iterative Correction (ACIC) and Corner Feature Optimization and Fusion (CFOF) modules. From the detected landmarks, key spinal parameters (i.e., Cobb angles) are computed to complete AIS Lenke classification. Experimental results demonstrate the superiority of the proposed framework on spine landmark detection and Lenke classification, providing strong support for AIS diagnosis and treatment.
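The abstract does not spell out the angle computation; as a hedged illustration of how Cobb angles can follow from detected corner landmarks, a minimal numpy sketch using the standard maximum-endplate-tilt definition (the corner ordering here is an assumption, not the paper's) might look like:

```python
import numpy as np

def endplate_angles(corners):
    """Inclination of each vertebra's superior endplate.
    corners: (N, 4, 2) detected corner landmarks per vertebra, assumed
    ordered [top-left, top-right, bottom-left, bottom-right] in image
    coordinates (the paper's actual ordering is not stated)."""
    top_vec = corners[:, 1] - corners[:, 0]          # superior endplate vector
    return np.arctan2(top_vec[:, 1], top_vec[:, 0])

def cobb_angle_deg(corners):
    """Cobb angle as the largest pairwise endplate-inclination difference,
    the standard geometric definition used in automated measurement."""
    ang = endplate_angles(corners)
    return np.degrees(np.abs(ang[:, None] - ang[None, :]).max())
```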
{"title":"Adjacent point aided vertebral landmark detection and Cobb angle measurement for automated AIS diagnosis","authors":"Xiaopeng Du ,&nbsp;Hongyu Wang ,&nbsp;Lihang Jiang ,&nbsp;Changlin Lv ,&nbsp;Yongming Xi ,&nbsp;Huan Yang","doi":"10.1016/j.compmedimag.2025.102496","DOIUrl":"10.1016/j.compmedimag.2025.102496","url":null,"abstract":"<div><div>Adolescent Idiopathic Scoliosis (AIS) is a prevalent structural deformity disease of human spine, and accurate assessment of spinal anatomical parameters is essential for clinical diagnosis and treatment planning. In recent years, significant progress has been made in automatic AIS diagnosis based on deep learning methods. However, effectively utilizing spinal structure information to improve the parameter measurement and diagnosis accuracy from spinal X-ray images remains challenging. This paper proposes a novel spine keypoint detection framework to complete the intelligent diagnosis of AIS, with the assistance of spine rigid structure information. Specifically, a deep learning architecture called Landmark and Adjacent offset Detection (LAD-Net) is designed to predict spine centre and corner points as well as their related offset vectors, based on which error-detected landmarks can be effectively corrected via the proposed Adjacent Centre Iterative Correction (ACIC) and Corner Feature Optimization and Fusion (CFOF) modules. Based on the detected spine landmarks, spine key parameters (<em>i.e</em>. Cobb angles) can be computed to finish the AIS Lenke diagnosis. Experimental results demonstrate the superiority of the proposed framework on spine landmark detection and Lenke classification, providing strong support for AIS diagnosis and treatment.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102496"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143179161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-30 · DOI: 10.1016/j.compmedimag.2025.102497
Mingfu Jiang , Shuai Wang , Ka-Hou Chan , Yue Sun , Yi Xu , Zhuoneng Zhang , Qinquan Gao , Zhifan Gao , Tong Tong , Hing-Chiu Chang , Tao Tan
Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multimodal images, from different contrasts. However, noise degrades MR image quality and can therefore affect a doctor's diagnosis. Existing filtering, transform-domain, statistical, and Convolutional Neural Network (CNN) methods mainly aim to denoise individual sequences without considering the relationships between different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and they struggle to balance preservation of image texture details against denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross-Global Learnable Attention Network (MMCGLANet) for MR image denoising with arbitrary modal missing. Specifically, a weight-sharing encoder extracts the shallow features of the images, and a Convolutional Long Short-Term Memory (ConvLSTM) module extracts the associated features between different frames within the same modality. A Cross Global Learnable Attention Network (CGLANet) extracts and fuses image features both across modalities and within the same modality. In addition, a sequence code labels missing modalities, which allows arbitrary modal missing during model training, validation, and testing. Experimental results demonstrate that our method achieves good denoising results on different public and real MR image datasets.
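The abstract only names the sequence-code mechanism; a hedged sketch of one plausible reading, in which a binary code marks which sequences are present and missing slots are zero-filled (the modality list, shapes, and fill strategy are all assumptions), is:

```python
import torch

MODALITIES = ["T1", "T2", "FLAIR", "PD"]  # hypothetical ordering

def pack_with_sequence_code(available, shape=(1, 1, 64, 64)):
    """Stack available MR sequences into a fixed-shape network input and
    emit a binary sequence code marking which modalities are present.
    available: dict mapping modality name -> (B, 1, H, W) tensor."""
    code = torch.tensor([float(m in available) for m in MODALITIES])
    slices = [available.get(m, torch.zeros(shape)) for m in MODALITIES]
    return torch.cat(slices, dim=1), code  # (B, M, H, W), (M,)

# e.g. only T1 and FLAIR acquired for this subject:
x, code = pack_with_sequence_code({"T1": torch.randn(1, 1, 64, 64),
                                   "FLAIR": torch.randn(1, 1, 64, 64)})
```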
{"title":"Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing","authors":"Mingfu Jiang ,&nbsp;Shuai Wang ,&nbsp;Ka-Hou Chan ,&nbsp;Yue Sun ,&nbsp;Yi Xu ,&nbsp;Zhuoneng Zhang ,&nbsp;Qinquan Gao ,&nbsp;Zhifan Gao ,&nbsp;Tong Tong ,&nbsp;Hing-Chiu Chang ,&nbsp;Tao Tan","doi":"10.1016/j.compmedimag.2025.102497","DOIUrl":"10.1016/j.compmedimag.2025.102497","url":null,"abstract":"<div><div>Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multimodal, from different contrasts. However, noise will reduce the quality of MR images, and then affect the doctor’s diagnosis of diseases. Existing filtering methods, transform-domain methods, statistical methods and Convolutional Neural Network (CNN) methods main aim to denoise individual sequences of images without considering the relationships between multiple different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and hard to maintain a good balance between preserving image texture details and denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross-Global Learnable Attention Network (MMCGLANet) for MR image denoising with Arbitrary Modal Missing. Specifically, Encoder is employed to extract the shallow features of the image which share weight module, and Convolutional Long Short-Term Memory(ConvLSTM) is employed to extract the associated features between different frames within the same modal. Cross Global Learnable Attention Network(CGLANet) is employed to extract and fuse image features between multimodal and within the same modality. In addition, sequence code is employed to label missing modalities, which allows for Arbitrary Modal Missing during model training, validation, and testing. Experimental results demonstrate that our method has achieved good denoising results on different public and real MR image dataset.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102497"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143179162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Entity-level multiple instance learning for mesoscopic histopathology images classification with Bayesian collaborative learning and pathological prior transfer
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-27 · DOI: 10.1016/j.compmedimag.2025.102495
Qiming He , Yingming Xu , Qiang Huang , Jing Li , Yonghong He , Zhe Wang , Tian Guan

Background:

Entity-level pathologic structures, which have independent structure and function, lie at a mesoscopic scale between the cell level and the slide level; they contain only a limited number of structures and therefore provide fewer instances for multiple instance learning. This restricts the perception of local pathologic features and their relationships, causing semantic ambiguity and inefficient entity embedding.

Method:

This study proposes a novel entity-level multiple instance learning framework. To realize entity-level augmentation, entity component mixup enhances the capture of relationships among contextually localized pathological features. To strengthen the semantic synergy of global and local pathological features, Bayesian collaborative learning is proposed to jointly optimize instance and bag embeddings. Additionally, pathological prior transfer initializes the global attention pooling (a generic sketch of such attention pooling follows this abstract), thereby fundamentally improving entity embedding.

Results:

This study constructed a glomerular image dataset containing up to 23 types of lesion patterns. Extensive experiments demonstrate that the proposed framework achieves the best performance on 19 of the 23 types, with AUC exceeding 90% on 20 types and exceeding 95% on 11 types. Moreover, the proposed model achieves improvements of up to 18.9% and 14.7% over thumbnail-level and slide-level methods, respectively. Ablation studies and visualizations further reveal that this method synergistically strengthens feature representations when fewer instances are available.

Conclusion:

The proposed entity-level multiple instance learning enables accurate recognition of 23 types of lesion patterns, providing an effective tool for mesoscopic histopathology image classification. This demonstrates its ability to capture salient pathologic features and contextual relationships from fewer instances, and the approach can be extended to classify other pathologic entities.
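The paper's exact pooling layer is not given; the sketch below shows the widely used gated-attention MIL pooling (Ilse et al., 2018) as a stand-in for the "global attention pooling" named above, with all layer sizes assumed:

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Gated-attention MIL pooling: weight each instance embedding by a
    learned attention score and sum into a single bag embedding."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, instances):                     # (n_instances, dim)
        gate = torch.tanh(self.V(instances)) * torch.sigmoid(self.U(instances))
        alpha = torch.softmax(self.w(gate), dim=0)    # attention per instance
        return (alpha * instances).sum(dim=0), alpha  # bag embedding, weights
```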
{"title":"Entity-level multiple instance learning for mesoscopic histopathology images classification with Bayesian collaborative learning and pathological prior transfer","authors":"Qiming He ,&nbsp;Yingming Xu ,&nbsp;Qiang Huang ,&nbsp;Jing Li ,&nbsp;Yonghong He ,&nbsp;Zhe Wang ,&nbsp;Tian Guan","doi":"10.1016/j.compmedimag.2025.102495","DOIUrl":"10.1016/j.compmedimag.2025.102495","url":null,"abstract":"<div><h3>Background:</h3><div>Entity-level pathologic structures with independent structures and functions are at a mesoscopic scale between the cell-level and slide-level, containing limited structures thus providing fewer instances for multiple instance learning. This restricts the perception of local pathologic features and their relationships, causing semantic ambiguity and inefficiency of entity embedding.</div></div><div><h3>Method:</h3><div>This study proposes a novel entity-level multiple instance learning. To realize entity-level augmentation, entity component mixup enhances the capture of relationships of contextually localized pathology features. To strengthen the semantic synergy of global and local pathological features, Bayesian collaborative learning is proposed to construct co-optimization of instance and bag embedding. Additionally, pathological prior transfer implement the initial optimization of the global attention pooling thereby fundamentally improving entity embedding.</div></div><div><h3>Results:</h3><div>This study constructed a glomerular image dataset containing up to 23 types of lesion patterns. Intensive experiments demonstrate that the proposed framework achieves the best on 19 out of 23 types, with AUC exceeding 90<span><math><mtext>%</mtext></math></span> and 95<span><math><mtext>%</mtext></math></span> on 20 and 11 types, respectively. Moreover, the proposed model achieves up to 18.9<span><math><mtext>%</mtext></math></span> and 14.7<span><math><mtext>%</mtext></math></span> improvements compared to the thumbnail-level and slide-level methods. Ablation study and visualization further reveals this method synergistically strengthens the feature representations under the condition of fewer instances.</div></div><div><h3>Conclusion:</h3><div>The proposed entity-level multiple instance learning enables accurate recognition of 23 types of lesion patterns, providing an effective tool for mesoscopic histopathology images classification. This proves it is capable of capturing salient pathologic features and contextual relationships from the fewer instances, which can be extended to classify other pathologic entities.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102495"},"PeriodicalIF":5.4,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143234795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-26 · DOI: 10.1016/j.compmedimag.2024.102487
Jiarui Zhu , Hongfei Sun , Weixing Chen , Shaohua Zhi , Chenyang Liu , Mayang Zhao , Yuanpeng Zhang , Ta Zhou , Yu Lap Lam , Tao Peng , Jing Qin , Lina Zhao , Jing Cai , Ge Ren
Thoracic Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy for lung cancer treatment. However, CBCT images often suffer from streaking artifacts and noise caused by undersampled projections and low-dose exposure, resulting in loss of lung anatomy that contains crucial pulmonary tumor and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, they perform poorly at preserving the anatomical details that carry tumor information, owing to a lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework that generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building a customized feature-to-feature perceptual loss function (CFP-loss), and a feature-guided CycleGAN network. Our experiments showed that the proposed framework generates synthesized CT (sCT) images for the lung with high similarity to CT images, achieving an average SSIM of 0.9747 and an average PSNR of 38.5995 globally, and an average Spearman's coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images were also visually convincing, with effective artifact suppression, noise reduction, and preservation of distinctive anatomical details. Functional imaging tests further demonstrated the pulmonary texture correction of the sCT images; the similarity between functional imaging generated from sCT and CT images reached an average DSC of 0.9147, SCC of 0.9615, and R of 0.9661. Comparison experiments with a pixel-to-pixel loss also showed that the proposed perceptual loss significantly enhances the performance of the generative models involved. Our results indicate that the proposed framework outperforms state-of-the-art models for pulmonary CBCT enhancement and holds great promise for generating high-quality pulmonary imaging from CBCT to support further analysis of lung cancer treatment.
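The paper's CFP-loss is built on its own MTFS-Net features; the sketch below shows only the general feature-to-feature perceptual loss pattern it customizes, with the extractor and layer indices left as placeholders rather than values from the paper:

```python
import torch
import torch.nn.functional as F

def perceptual_loss(feature_net, pred, target, layers=(2, 5, 9)):
    """Compare activations of a frozen feature extractor at selected depths
    instead of raw pixels. feature_net: any torch.nn.Sequential; the layer
    indices are placeholders, not values from the paper."""
    loss, x, y = 0.0, pred, target
    for i, block in enumerate(feature_net):
        x, y = block(x), block(y)            # run both images in lockstep
        if i in layers:
            loss = loss + F.l1_loss(x, y)    # feature-to-feature distance
    return loss
```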
{"title":"Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN","authors":"Jiarui Zhu ,&nbsp;Hongfei Sun ,&nbsp;Weixing Chen ,&nbsp;Shaohua Zhi ,&nbsp;Chenyang Liu ,&nbsp;Mayang Zhao ,&nbsp;Yuanpeng Zhang ,&nbsp;Ta Zhou ,&nbsp;Yu Lap Lam ,&nbsp;Tao Peng ,&nbsp;Jing Qin ,&nbsp;Lina Zhao ,&nbsp;Jing Cai ,&nbsp;Ge Ren","doi":"10.1016/j.compmedimag.2024.102487","DOIUrl":"10.1016/j.compmedimag.2024.102487","url":null,"abstract":"<div><div>Thoracic Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy information for lung cancer treatments. However, CBCT images often suffer from streaking artifacts and noise caused by under-rate sampling projections and low-dose exposure, resulting in loss of lung anatomy which contains crucial pulmonary tumorous and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, they have limited performance on preserving anatomical details containing crucial tumorous information due to lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework which generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building up a customized feature-to-feature perceptual loss function (CFP-loss); and a feature-guided CycleGan network. Our experiments showed that the proposed framework can generate synthesized CT (sCT) images for the lung that achieved a high similarity to CT images, with an average SSIM index of 0.9747 and an average PSNR index of 38.5995 globally, and an average Pearman’s coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also achieved visually pleasing performance with effective artifacts suppression, noise reduction, and distinctive anatomical details preservation. Functional imaging tests further demonstrated the pulmonary texture correction performance of the sCT images, and the similarity of the functional imaging generated from sCT and CT images has reached an average DSC value of 0.9147, SCC value of 0.9615 and R value of 0.9661. Comparison experiments with pixel-to-pixel loss also showed that the proposed perceptual loss significantly enhances the performance of involved generative models. Our experiment results indicate that the proposed framework outperforms the state-of-the-art models for pulmonary CBCT enhancement. 
This framework holds great promise for generating high-quality pulmonary imaging from CBCT that is suitable for supporting further analysis of lung cancer treatment.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102487"},"PeriodicalIF":5.4,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143076326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Contrastive learning in brain imaging
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-26 · DOI: 10.1016/j.compmedimag.2025.102500
Xiaoyin Xu , Stephen T.C. Wong
Contrastive learning is a type of deep learning technique that classifies data or examples without requiring data labeling. Instead, it learns the most representative features by contrasting positive and negative pairs of examples. In the contrastive learning literature, the terms positive example and negative example do not indicate whether the examples themselves are positive or negative for some characteristic, as one might encounter in medicine. Rather, positive examples simply mean that two examples belong to the same class, while negative examples mean they belong to different classes. Contrastive learning maps data to a latent space and works under the assumption that examples of the same class should be located close to each other in that space, while examples from different classes should lie far apart. In other words, contrastive learning can be considered a discriminator that groups examples of the same class together while separating examples of different classes from each other, preferably as far as possible. Since its inception, contrastive learning has constantly evolved and can be realized as self-supervised, semi-supervised, or unsupervised learning. Contrastive learning has found wide application in medical imaging, and it is expected to play an increasingly important role in medical image processing and analysis.
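As one standard instantiation of the pull-together/push-apart objective described above (not code from the review itself), the InfoNCE loss can be sketched as:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE over a batch of embedding pairs: (z1[i], z2[i]) are positives
    (two views / same class), every other pairing in the batch acts as a
    negative. Minimizing it pulls positives together in the latent space
    and pushes negatives apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # cosine similarity matrix
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```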
{"title":"Contrastive learning in brain imaging","authors":"Xiaoyin Xu ,&nbsp;Stephen T.C. Wong","doi":"10.1016/j.compmedimag.2025.102500","DOIUrl":"10.1016/j.compmedimag.2025.102500","url":null,"abstract":"<div><div>Contrastive learning is a type of deep learning technique trying to classify data or examples without requiring data labeling. Instead, it learns about the most representative features that contrast positive and negative pairs of examples. In literature of contrastive learning, terms of positive examples and negative examples do not mean whether the examples themselves are positive or negative of certain characteristics as one might encounter in medicine. Rather, positive examples just mean that the examples are of the same class, while negative examples mean that the examples are of different classes. Contrastive learning maps data to a latent space and works under the assumption that examples of the same class should be located close to each other in the latent space; and examples from different classes would locate far from each other. In other words, contrastive learning can be considered as a discriminator that tries to group examples of the same class together while separating examples of different classes from each other, preferably as far as possible. Since its inception, contrastive learning has been constantly evolving and can be realized as self-supervised, semi-supervised, or unsupervised learning. Contrastive learning has found wide applications in medical imaging and it is expected it will play an increasingly important role in medical image processing and analysis.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102500"},"PeriodicalIF":5.4,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143076316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Opportunistic AI for enhanced cardiovascular disease risk stratification using abdominal CT scans
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-20 · DOI: 10.1016/j.compmedimag.2025.102493
Azka Rehman , Jaewon Kim , Lee Hyeokjong , Jooyoung Chang , Sang Min Park
This study introduces the Deep Learning-based Cardiovascular Disease Incident (DL-CVDi) score, a novel biomarker derived from routine abdominal CT scans, optimized to predict cardiovascular disease (CVD) risk using deep survival learning. CT imaging, frequently used for diagnosing various conditions, contains opportunistic biomarkers that can be leveraged beyond their initial diagnostic purpose. Using a Cox proportional hazards-based survival loss, the DL-CVDi score captures complex, non-linear relationships between anatomical features and CVD risk. Clinical validation demonstrated that participants with high DL-CVDi scores had a significantly elevated risk of CVD incidents (hazard ratio [HR]: 2.75, 95% CI: 1.27–5.95, p-trend <0.005) after adjusting for traditional risk factors. Additionally, the DL-CVDi score improved the concordance of baseline models, such as age and sex (from 0.662 to 0.700) and the Framingham Risk Score (from 0.697 to 0.742). Given its reliance on widely available abdominal CT data, the DL-CVDi score has substantial potential as an opportunistic screening tool for CVD risk in diverse clinical settings. Future research should validate these findings across multi-ethnic cohorts and explore its utility in patients with comorbid conditions, establishing the DL-CVDi score as a valuable addition to current CVD risk assessment strategies.
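The abstract names a Cox proportional hazards-based survival loss; the usual deep-survival form of that objective, the negative Cox partial log-likelihood (a sketch under the Breslow ties approximation, not the authors' code), is:

```python
import torch

def cox_ph_loss(log_hazard, time, event):
    """Negative Cox partial log-likelihood.
    log_hazard: (N,) predicted log relative hazards; time: (N,) follow-up
    times; event: (N,) float, 1.0 if a CVD incident was observed."""
    order = torch.argsort(time, descending=True)   # longest follow-up first
    h, e = log_hazard[order], event[order]
    log_risk_set = torch.logcumsumexp(h, dim=0)    # log-sum over each risk set
    return -((h - log_risk_set) * e).sum() / e.sum().clamp(min=1.0)
```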
{"title":"Opportunistic AI for enhanced cardiovascular disease risk stratification using abdominal CT scans","authors":"Azka Rehman ,&nbsp;Jaewon Kim ,&nbsp;Lee Hyeokjong ,&nbsp;Jooyoung Chang ,&nbsp;Sang Min Park","doi":"10.1016/j.compmedimag.2025.102493","DOIUrl":"10.1016/j.compmedimag.2025.102493","url":null,"abstract":"<div><div>This study introduces the Deep Learning-based Cardiovascular Disease Incident (DL-CVDi) score, a novel biomarker derived from routine abdominal CT scans, optimized to predict cardiovascular disease (CVD) risk using deep survival learning. CT imaging, frequently used for diagnosing various conditions, contains opportunistic biomarkers that can be leveraged beyond their initial diagnostic purpose. Using a Cox proportional hazards-based survival loss, the DL-CVDi score captures complex, non-linear relationships between anatomical features and CVD risk. Clinical validation demonstrated that participants with high DL-CVDi scores had a significantly elevated risk of CVD incidents (hazard ratio [HR]: 2.75, 95% CI: 1.27–5.95, p-trend <span><math><mo>&lt;</mo></math></span>0.005) after adjusting for traditional risk factors. Additionally, the DL-CVDi score improved the concordance of baseline models, such as age and sex (from 0.662 to 0.700) and the Framingham Risk Score (from 0.697 to 0.742). Given its reliance on widely available abdominal CT data, the DL-CVDi score has substantial potential as an opportunistic screening tool for CVD risk in diverse clinical settings. Future research should validate these findings across multi-ethnic cohorts and explore its utility in patients with comorbid conditions, establishing the DL-CVDi score as a valuable addition to current CVD risk assessment strategies.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102493"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143043246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A graph neural network-based model with out-of-distribution robustness for enhancing antiretroviral therapy outcome prediction for HIV-1
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-10 · DOI: 10.1016/j.compmedimag.2024.102484
Giulia Di Teodoro , Federico Siciliano , Valerio Guarrasi , Anne-Mieke Vandamme , Valeria Ghisetti , Anders Sönnerborg , Maurizio Zazzi , Fabrizio Silvestri , Laura Palagi
Predicting the outcome of antiretroviral therapies (ART) for HIV-1 is a pressing clinical challenge, especially when the ART includes drugs with limited effectiveness data. Such data scarcity can arise either from the introduction of a new drug to the market or from limited use in clinical settings, resulting in clinical datasets with highly unbalanced therapy representation. To tackle this issue, we introduce a novel joint fusion model, which combines features from a Fully Connected (FC) Neural Network and a Graph Neural Network (GNN) in a multi-modality fashion. Our model uses both tabular data about genetic sequences and a knowledge base derived from Stanford drug-resistance mutation tables, which serve as benchmark references for deducing in-vivo treatment efficacy from the viral genetic sequence. By leveraging this knowledge base structured as a graph, the GNN component enables our model to adapt to imbalanced data distributions and account for Out-of-Distribution (OoD) drugs. We evaluated the models' robustness against OoD drugs in the test set. Our comprehensive analysis demonstrates that the proposed model consistently outperforms the FC model. These results underscore the advantage of integrating the Stanford scores into the model, enhancing its generalizability and robustness and extending its utility toward more informed clinical decisions when data availability is limited. The source code is available at https://github.com/federicosiciliano/graph-ood-hiv.
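The abstract describes joint fusion of an FC branch and a GNN branch; a minimal sketch of such a fusion head (the layer sizes, and the assumption that the graph branch has already produced an embedding, are ours) is:

```python
import torch
import torch.nn as nn

class JointFusion(nn.Module):
    """Fuse a tabular (FC) branch with a precomputed graph embedding and
    classify therapy outcome. Dimensions are illustrative only."""
    def __init__(self, tab_dim=100, graph_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(tab_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden + graph_dim, n_classes)

    def forward(self, tabular, graph_emb):    # (B, tab_dim), (B, graph_dim)
        fused = torch.cat([self.fc(tabular), graph_emb], dim=1)
        return self.head(fused)               # (B, n_classes) logits
```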
Citations: 0
PADS-Net: GAN-based radiomics using multi-task network of denoising and segmentation for ultrasonic diagnosis of Parkinson disease
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-08 · DOI: 10.1016/j.compmedimag.2024.102490
Yiwen Shen , Li Chen , Jieyi Liu , Haobo Chen , Changyan Wang , Hong Ding , Qi Zhang
Parkinson disease (PD) is a prevalent neurodegenerative disorder, and its accurate diagnosis is crucial for timely intervention. We propose the PArkinson disease Denoising and Segmentation Network (PADS-Net) to simultaneously denoise and segment transcranial ultrasound images of the midbrain for accurate PD diagnosis. PADS-Net is built upon generative adversarial networks and incorporates a multi-task deep learning framework that jointly optimizes the denoising and segmentation tasks for ultrasound images. A composite loss function combining the mean absolute error, the mean squared error, and the Dice loss is adopted to effectively capture image details. PADS-Net also integrates radiomics techniques for PD diagnosis by exploiting high-throughput features from ultrasound images. A four-branch ensemble diagnostic model is designed that utilizes the two "wings" of the butterfly-shaped midbrain region on both ipsilateral and contralateral images to enhance the accuracy of PD diagnosis. Experimental results demonstrate that PADS-Net not only reduced speckle noise, achieving an edge-to-noise ratio of 16.90, but also attained a Dice coefficient of 0.91 for midbrain segmentation. PADS-Net finally achieved an area under the receiver operating characteristic curve as high as 0.87 for PD diagnosis. Our PADS-Net excels in transcranial ultrasound image denoising and segmentation and offers a potential clinical solution for accurate PD assessment.
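The abstract states that the composite loss combines MAE, MSE, and Dice terms; a hedged sketch of that combination (the term weights are not given in the abstract and default to 1 here) is:

```python
import torch
import torch.nn.functional as F

def composite_loss(denoised, clean, seg_logits, mask, w=(1.0, 1.0, 1.0)):
    """MAE + MSE on the denoised image plus a soft Dice term on the
    segmentation, as named in the abstract; weights w are assumptions."""
    mae = F.l1_loss(denoised, clean)
    mse = F.mse_loss(denoised, clean)
    prob = torch.sigmoid(seg_logits)
    inter = (prob * mask).sum()
    dice = 1.0 - (2.0 * inter + 1e-6) / (prob.sum() + mask.sum() + 1e-6)
    return w[0] * mae + w[1] * mse + w[2] * dice
```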
Citations: 0
Deep Equilibrium Unfolding Learning for Noise Estimation and Removal in Optical Molecular Imaging
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-08 · DOI: 10.1016/j.compmedimag.2025.102492
Lidan Fu , Lingbing Li , Binchun Lu , Xiaoyong Guo , Xiaojing Shi , Jie Tian , Zhenhua Hu
In clinical optical molecular imaging, the need for real-time high frame rates and low excitation doses to ensure patient safety inherently increases susceptibility to detection noise. Faced with image degradation caused by severe noise, image denoising is essential for mitigating the trade-off between acquisition cost and image quality. However, prevailing deep learning methods exhibit uncontrollable and suboptimal performance with limited interpretability, primarily because they neglect the underlying physical model and frequency information. In this work, we introduce an end-to-end, model-driven Deep Equilibrium Unfolding Mamba (DEQ-UMamba) that integrates the proximal gradient descent technique with learnt spatial-frequency characteristics to decouple complex noise structures into statistical distributions, enabling effective noise estimation and suppression in fluorescence images. Moreover, to address the computational limitations of unfolding networks, DEQ-UMamba trains an implicit mapping by directly differentiating the equilibrium point of the convergent solution, thereby ensuring stability and avoiding non-convergent behavior. With each network module aligned to a corresponding operation in the iterative optimization process, the proposed method achieves clear structural interpretability and strong performance. Comprehensive experiments on both clinical and in vivo datasets demonstrate that DEQ-UMamba outperforms current state-of-the-art alternatives while using fewer parameters, advancing cost-effective, high-quality clinical molecular imaging.
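As a generic illustration of the deep-equilibrium idea the abstract invokes (solve for a fixed point without storing intermediate iterates, then differentiate at the equilibrium), the sketch below uses the common one-step, Jacobian-free gradient approximation rather than the authors' exact implicit differentiation:

```python
import torch

def deq_forward(f, x, z0, max_iters=50, tol=1e-4):
    """Fixed-point solve z* = f(z*, x) with gradients detached, then one
    final differentiable application of f so autograd sees only the
    equilibrium. f: callable(z, x) -> z; z0: initial iterate."""
    z = z0
    with torch.no_grad():                    # memory-free forward solve
        for _ in range(max_iters):
            z_next = f(z, x)
            if (z_next - z).norm() <= tol * z_next.norm().clamp(min=1e-8):
                z = z_next
                break
            z = z_next
    return f(z, x)                           # re-attach the graph at z*
```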
Citations: 0
NURBS curve shape prior-guided multiscale attention network for automatic segmentation of the inferior alveolar nerve
IF 5.4 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-07 · DOI: 10.1016/j.compmedimag.2024.102485
Shuanglin Jiang , Jiangchang Xu , Wenyin Wang , Baoxin Tao , Yiqun Wu , Xiaojun Chen
Accurate segmentation of the inferior alveolar nerve (IAN) in Cone-Beam Computed Tomography (CBCT) images is critical for precise planning of oral and maxillofacial surgeries, especially to avoid IAN damage. Existing methods often fail due to the low contrast of the IAN and the presence of artifacts, which can cause segmentation discontinuities. To address these challenges, this paper proposes a novel approach that incorporates Non-Uniform Rational B-Spline (NURBS) curve shape priors into a multiscale attention network for automatic IAN segmentation. First, an automatic method for generating the NURBS shape prior is proposed and introduced into the segmentation network, significantly enhancing the continuity and accuracy of IAN segmentation. A multiscale attention segmentation network incorporating a dilation-selective attention module is then developed to improve the network's feature-extraction capacity. The proposed approach is validated on both in-house and public datasets, showing superior performance over established benchmarks: on the private dataset it achieves a Dice coefficient (Dice) of 80.29±11.04% and an intersection over union (IoU) of 68.14±12.06%, with a 95% Hausdorff distance (95HD) of 1.61±6.14 mm and a mean surface distance (MSD) of 0.64±2.16 mm. On the public dataset, Dice reaches 80.69±4.93%, IoU 67.86±6.73%, 95HD 1.04±0.95 mm, and MSD 0.42±0.34 mm. Compared to state-of-the-art networks, the proposed approach outperforms in both voxel accuracy and surface distance. It offers significant potential to improve doctors' efficiency in segmentation tasks and holds promise for applications in dental surgery planning. The source codes are available at https://github.com/SJTUjsl/NURBS_IAN.git.
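The abstract does not reproduce its prior-generation procedure; as a hedged refresher on what a NURBS curve is, the textbook construction — a B-spline over weighted (homogeneous) control points followed by a perspective divide — can be sketched with scipy as:

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_curve(ctrl, weights, knots, degree, t):
    """Evaluate a NURBS curve at parameter values t.
    ctrl: (n, dim) control points; weights: (n,); knots must satisfy
    len(knots) == n + degree + 1. Generic construction only, not the
    paper's automatic shape-prior generation."""
    ctrl = np.asarray(ctrl, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    homog = np.hstack([ctrl * w, w])        # lift to homogeneous coordinates
    pts = BSpline(knots, homog, degree)(t)  # plain B-spline in that space
    return pts[:, :-1] / pts[:, -1:]        # perspective divide back to R^dim
```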
Citations: 0