
Latest publications in IEEE Journal of Biomedical and Health Informatics

FastCRL: A Fast Network With Adaptive Fourier Transform and Offset Prediction for Fetal Crown-Rump Length Measurement and Position Estimation in Ultrasound Images.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-06. DOI: 10.1109/JBHI.2025.3539391
Jiatao Liu, Ying Tan, Chunlian Wang, Kenli Li, Guanghua Tan, Chubo Liu

Fetal crown-rump length (CRL) is one of the most accurate methods for estimating gestational age in early pregnancy. Typically, the process of manual CRL measurement by physicians is cumbersome, prone to errors due to fetal position, and susceptible to inter-observer variability. To provide an accurate, real-time, and reliable fetal CRL measurement solution, we propose FastCRL, which uses key landmark detection for efficient CRL measurement and fetal position estimation. Specifically, fast and lightweight network blocks are employed for both the encoder and decoder. By outputting low-resolution heatmaps and axial offset maps of key landmarks, we achieve a balance between high accuracy and fast inference speed. A novel Lightweight Adaptive Fourier Transform (LAFT) module is introduced to globally filter noise in ultrasound images and enhance the features required for landmark prediction. Additionally, the challenge of evaluating fetal flexion and extension is effectively addressed by analyzing the angles between key landmarks on the fetal head, buttocks, and neck. Experimental results on our dataset indicate that our method for determining fetal position is both objective and efficient. FastCRL achieves a performance level consistent with the average human expert. In terms of measuring CRL, FastCRL achieves an error rate of less than 3% in 99.1% of measurements with 32 ms latency, significantly outperforming other baselines and demonstrating substantial potential for clinical application.
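
The two quantities the network ultimately has to deliver follow directly from the predicted landmark coordinates: the CRL is the crown-to-rump distance scaled by the pixel spacing, and flexion/extension is judged from the angle at the neck landmark. A minimal sketch of those two computations is given below; the coordinates, pixel spacing, and landmark names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): given predicted landmark
# coordinates in pixels and the pixel spacing, compute the CRL distance
# and the head-neck-buttock angle used to judge flexion/extension.
import numpy as np

def crl_mm(crown_px, rump_px, mm_per_px):
    """Crown-rump length: Euclidean distance between the two landmarks."""
    return float(np.linalg.norm(np.asarray(crown_px, float) - np.asarray(rump_px, float)) * mm_per_px)

def flexion_angle_deg(head_px, neck_px, buttock_px):
    """Angle at the neck landmark formed by the head and buttock landmarks."""
    u = np.asarray(head_px, float) - np.asarray(neck_px, float)
    v = np.asarray(buttock_px, float) - np.asarray(neck_px, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Usage with made-up coordinates:
print(crl_mm((120, 80), (310, 95), mm_per_px=0.2))        # CRL in mm
print(flexion_angle_deg((120, 80), (200, 60), (310, 95)))  # position angle in degrees
```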

{"title":"FastCRL: A Fast Network With Adaptive Fourier Transform and Offset Prediction for Fetal Crown-Rump Length Measurement and Position Estimation in Ultrasound Images.","authors":"Jiatao Liu, Ying Tan, Chunlian Wang, Kenli Li, Guanghua Tan, Chubo Liu","doi":"10.1109/JBHI.2025.3539391","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3539391","url":null,"abstract":"<p><p>Fetal crown-rump length (CRL) is one of the most accurate method for estimating gestational age in early pregnancy. Typically, the process of manual CRL measurement by physicians is cumbersome, prone to errors due to fetal position, and susceptible to inter-observer variability. To provide an accurate, real-time, and reliable fetal CRL measurement solution, we propose FastCRL that utilizes key landmarks detection for efficient CRL measurements and fetal position estimation. Specifically, fast and lightweight network blocks are employed for both the encoder and decoder. By outputting low-resolution heatmaps and axial offset maps of key landmarks, we achieve a balance between high accuracy and fast inference speed. A novel Lightweight Adaptive Fourier Transform (LAFT) module is introduced to globally filter noise in ultrasound images and enhance the features required for landmark prediction. Additionally, the challenge of evaluating fetal position flexion and extension is effectively addressed by analyzing the angles between key landmarks on the fetal head, buttocks, and neck. The experimental results on our dataset indicate that our method for determining fetal position is both objective and efficient. FastCRL achieves a performance level consistent with the average human expert. In terms of measuring CRL, FastCRL achieved an error rate of less than 3% in 99.1% of measurements with 32 ms latency, significantly outperforming other baselines and demonstrating substantial potential for clinical application.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A multi-sequence MRI-based hierarchical expert diagnostic method for the molecular subtype of breast cancer.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-05. DOI: 10.1109/JBHI.2024.3486182
Hongyu Wang, Yanfang Hao, Pingping Wang, Erjuan Wang, Songtao Ding, Baoying Chen

Breast cancer is one of the cancers of deep concern worldwide, and its molecular subtype is important for treatment selection and prognosis assessment. Multi-sequence MRI provides a new non-invasive companion diagnostic method for the molecular subtypes of breast cancer, as it can more accurately assess the vascular status of tumors and reveal fine structures. However, providing interpretable classification results remains a challenge. Although many convolutional neural network (CNN) methods and fine-grained classification methods based on MRI inputs have recently been proposed, most of them operate as a 'black box' without a detailed explanation of the intermediate processes, resulting in a lack of interpretability in the breast cancer classification process. To address this problem, our study proposes a multi-sequence MRI-based hierarchical expert diagnostic method for the molecular subtype of breast cancer. With a strong differentiation module, this method first identifies enhanced features in breast tumors, ensuring that the subsequent classification process is precisely focused on the lesion features. In addition, inspired by the co-diagnosis of multiple experts in clinical practice, we set up a mechanism of collaborative diagnostic corrective learning by hierarchical experts to provide an interpretable classification process. Compared with previous studies, the framework learns features with strong discriminative ability for breast tumor classification. Specifically, multiple experts correct each other's learning to give more accurate and interpretable classification results, significantly improving the practical value of clinical diagnosis. We conducted extensive experiments on a breast dataset and compared the method quantitatively with others, achieving the best performance in terms of accuracy (0.889) and F1 score (0.893). The code is publicly available on GitHub: https://github.com/yanfangHao/HED.
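
To make the multi-expert idea concrete, the sketch below shows a generic multi-expert classifier in which each expert head scores one MRI-sequence embedding and the consensus is the average of the per-expert logits. It only illustrates the pattern, not the released HED code, and the number of sequences, feature dimensions, and class count are assumptions.

```python
# Minimal sketch of a multi-expert classifier over per-sequence MRI embeddings.
import torch
import torch.nn as nn

class MultiExpertClassifier(nn.Module):
    def __init__(self, feat_dim=256, n_sequences=3, n_classes=4):
        super().__init__()
        # One small MLP expert per MRI sequence.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
            for _ in range(n_sequences)
        ])

    def forward(self, feats):  # feats: (batch, n_sequences, feat_dim)
        logits = torch.stack(
            [expert(feats[:, i]) for i, expert in enumerate(self.experts)], dim=1
        )  # (batch, n_sequences, n_classes)
        # Consensus prediction: average the per-expert logits.
        return logits.mean(dim=1), logits

model = MultiExpertClassifier()
fused, per_expert = model(torch.randn(2, 3, 256))
print(fused.shape, per_expert.shape)  # torch.Size([2, 4]) torch.Size([2, 3, 4])
```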

{"title":"A multi-sequence MRI-based hierarchical expert diagnostic method for the molecular subtype of breast cancer.","authors":"Hongyu Wang, Yanfang Hao, Pingping Wang, Erjuan Wang, Songtao Ding, Baoying Chen","doi":"10.1109/JBHI.2024.3486182","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3486182","url":null,"abstract":"<p><p>Breast cancer is one of the cancers of deep concern worldwide, and the molecular subtype of breast cancer is significant for patients' treatment selection, and prognosis judgment. The application of multi-sequence MRI technology provides a new non-invasive companion diagnostic method for molecular subtypes of breast cancer, which can more accurately assess the vascular status of tumors and reveal fine structures. However, providing interpretable classification results remains a challenge. Recently, although many convolutional neural network (CNN) methods and fine-grained classification methods based on MRI inputs have been proposed. However, most of these methods operate in a 'black-box' without a detailed explanation of the intermediate processes, resulting in a lack of interpretability of the breast cancer classification process. To address this problem, our study proposes a multi-sequence MRI-based hierarchical expert diagnostic method for the molecular subtype of breast cancer. With the strong differentiation module, this method first identifies enhanced features in breast tumors, ensuring that the subsequent classification process is precisely focused on the lesion features. In addition, inspired by the codiagnosis of multiple experts in clinical diagnosis, we set up a mechanism of collaborative diagnostic corrective learning by hierarchical experts to provide an interpretable classification process. Compared with previous studies, the framework learns features with a strong distinguishing ability for breast tumor classification. Specifically, multiple experts corrected each other's learning to give more accurate and interpretable classification results, significantly improving clinical diagnosis's practical value. We conducted extensive experiments on a breast dataset and compared it quantitatively with other methods, and we achieved the best performance in terms of accuracy (0.889) and F1 Score (0.893).We make the code public on GitHub: https://github.com/yanfangHao/HED.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CrossConvPyramid: Deep Multimodal Fusion for Epileptic Magnetoencephalography Spike Detection.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-04. DOI: 10.1109/JBHI.2025.3538582
Liang Zhang, Shurong Sheng, Xiongfei Wang, Jia-Hong Gao, Yi Sun, Kuntao Xiao, Wanli Yang, Pengfei Teng, Guoming Luan, Zhao Lv

Magnetoencephalography (MEG) is a vital non-invasive tool for epilepsy analysis, as it captures high-resolution signals that reflect changes in brain activity over time. The automated detection of epileptic spikes within these signals can significantly reduce the labor and time required for manual annotation of MEG recordings, thereby aiding clinicians in identifying epileptogenic foci and evaluating treatment prognosis. Research in this domain often uses the raw, multi-channel signals from MEG scans for spike detection, commonly neglecting the multi-channel spiking patterns of spatially adjacent channels. Moreover, epileptic spikes share considerable morphological similarities with artifact signals in the recordings, making it challenging for models to differentiate between the two. In this paper, we introduce a multimodal fusion framework that addresses these two challenges collectively. Instead of relying solely on the signal recordings, our framework also mines knowledge from their corresponding topography-map images, which encapsulate the spatial context and amplitude distribution of the input signals. To facilitate more effective data fusion, we present a novel multimodal feature fusion technique called CrossConvPyramid, built upon a convolutional pyramid architecture augmented by an attention mechanism. It first employs cross-attention and a convolutional pyramid to encode inter-modal correlations within the intermediate features extracted by the individual unimodal networks. It then uses a self-attention mechanism to refine and select the most salient features from both the inter-modal and unimodal features, specifically tailored for the spike classification task. Our method achieved average F1 scores of 92.88% and 95.23% on two distinct real-world MEG datasets from separate centers, outperforming the current state of the art by 2.31% and 0.88%, respectively. We plan to release the code on GitHub later.
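
The core fusion step, signal features attending to topography-map features, can be illustrated in a few lines of PyTorch. The sketch below is a generic cross-attention fusion head under assumed token counts and dimensions; it is not the CrossConvPyramid implementation.

```python
# Minimal sketch: fuse a signal-branch token sequence with topography-map
# tokens via cross-attention, then pool for binary spike classification.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)  # spike vs. non-spike

    def forward(self, sig_tokens, img_tokens):
        # Signal tokens query the topography-map tokens.
        fused, _ = self.cross(query=sig_tokens, key=img_tokens, value=img_tokens)
        return self.head(fused.mean(dim=1))  # pool over tokens

model = CrossAttentionFusion()
out = model(torch.randn(8, 50, 128), torch.randn(8, 196, 128))
print(out.shape)  # torch.Size([8, 2])
```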

{"title":"CrossConvPyramid: Deep Multimodal Fusion for Epileptic Magnetoencephalography Spike Detection.","authors":"Liang Zhang, Shurong Sheng, Xiongfei Wang, Jia-Hong Gao, Yi Sun, Kuntao Xiao, Wanli Yang, Pengfei Teng, Guoming Luan, Zhao Lv","doi":"10.1109/JBHI.2025.3538582","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3538582","url":null,"abstract":"<p><p>Magnetoencephalography (MEG) is a vital non-invasive tool for epilepsy analysis, as it captures high-resolution signals that reflect changes in brain activity over time. The automated detection of epileptic spikes within these signals can significantly reduce the labor and time required for manual annotation of MEG recording data, thereby aiding clinicians in identifying epileptogenic foci and evaluating treatment prognosis. Research in this domain often utilizes the raw, multi-channel signals from MEG scans for spike detection, commonly neglecting the multi-channel spiking patterns from spatially adjacent channels. Moreover, epileptic spikes share considerable morphological similarities with artifact signals within the recordings, posing a challenge for models to differentiate between the two. In this paper, we introduce a multimodal fusion framework that addresses these two challenges collectively. Instead of relying solely on the signal recordings, our framework also mines knowledge from their corresponding topography-map images, which encapsulate the spatial context and amplitude distribution of the input signals. To facilitate more effective data fusion, we present a novel multimodal feature fusion technique called CrossConvPyramid, built upon a convolutional pyramid architecture augmented by an attention mechanism. It initially employs cross-attention and a convolutional pyramid to encode inter-modal correlations within the intermediate features extracted by individual unimodal networks. Subsequently, it utilizes a self-attention mechanism to refine and select the most salient features from both inter-modal and unimodal features, specifically tailored for the spike classification task. Our method achieved the average F1 scores of 92.88% and 95.23% across two distinct real-world MEG datasets from separate centers, respectively outperforming the current state-of-the-art by 2.31% and 0.88%. We plan to release the code on GitHub later.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GFLearn: Generalized Feature Learning for Drug-Target Binding Affinity Prediction.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-04. DOI: 10.1109/JBHI.2025.3538497
Zibo Huang, Xinrui Weng, Le Ou-Yang

Predicting drug-target binding affinity is critical for drug discovery, as it helps identify promising drug candidates and predict their effectiveness. Recent advances in deep learning have made significant progress on this task. However, existing methods rely heavily on training data, and their performance is often limited when predicting binding affinities for new drugs and targets. To address this challenge, we propose a novel Generalized Feature Learning (GFLearn) model for drug-target binding affinity prediction. By integrating Graph Neural Networks (GNNs) with a self-supervised invariant feature learning module, our GFLearn model can extract robust and highly generalizable features from both drugs and targets, significantly enhancing prediction performance. This innovation enables the model to effectively predict binding affinities for previously unseen drugs or targets, while also mitigating the common issue of prediction performance degrading under shifts in data distribution. Extensive experiments were conducted on two diverse datasets across three challenging scenarios: new drugs, new targets, and combinations of both. Comparisons with state-of-the-art methods demonstrated that our GFLearn model consistently outperformed the others, showcasing its robustness across various prediction tasks. Additionally, cross-dataset evaluations and noise perturbation experiments further validated the model's generalizability across different data distributions. Case studies on two drug-target pairs, Canertinib-PIK3C2G and MLN8054-FLT1, provided further evidence of GFLearn's ability to make accurate binding affinity predictions, offering valuable insights for drug screening and repurposing efforts.
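
For readers unfamiliar with the task setup, the sketch below is a bare-bones drug-target affinity regressor: one hand-rolled message-passing step over the drug graph, a 1-D CNN over the protein sequence, and an MLP on the concatenated embeddings. It illustrates the generic pipeline only; GFLearn's GNN and invariant feature learning module are not reproduced here, and all dimensions are assumptions.

```python
# Minimal sketch of a generic drug-target affinity regressor (not GFLearn).
import torch
import torch.nn as nn

class AffinityRegressor(nn.Module):
    def __init__(self, atom_dim=32, aa_vocab=25, hidden=64):
        super().__init__()
        self.atom_proj = nn.Linear(atom_dim, hidden)
        self.aa_embed = nn.Embedding(aa_vocab, hidden)
        self.prot_cnn = nn.Conv1d(hidden, hidden, kernel_size=7, padding=3)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, atom_feats, adj, prot_ids):
        # One message-passing step: each atom aggregates its neighbours.
        h = torch.relu(self.atom_proj(adj @ atom_feats))           # (n_atoms, hidden)
        drug_vec = h.mean(dim=0)                                   # graph-level embedding
        p = self.aa_embed(prot_ids).transpose(0, 1).unsqueeze(0)   # (1, hidden, seq_len)
        prot_vec = torch.relu(self.prot_cnn(p)).mean(dim=-1).squeeze(0)
        return self.mlp(torch.cat([drug_vec, prot_vec])).squeeze(-1)

model = AffinityRegressor()
y = model(torch.randn(20, 32), torch.eye(20), torch.randint(0, 25, (300,)))
print(y.item())  # predicted binding affinity (arbitrary units)
```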

预测药物与靶点的结合亲和力对药物发现至关重要,因为这有助于确定有前途的候选药物并预测其有效性。深度学习领域的最新进展在解决这一任务方面取得了重大进展。然而,现有方法严重依赖训练数据,在预测新药和新靶点的结合亲和力时,其性能往往受到限制。为了应对这一挑战,我们提出了一种用于药物与靶点结合亲和力预测的新型广义特征学习(GFLearn)模型。通过将图神经网络(GNN)与自监督不变特征学习模块相结合,我们的 GFLearn 模型可以从药物和靶标中提取稳健且高度泛化的特征,从而显著提高预测性能。这一创新使该模型能够有效预测以前未见过的药物或靶标的结合亲和力,同时也缓解了因数据分布变化而导致预测性能下降的常见问题。我们在两个不同的数据集上进行了广泛的实验,涉及三种具有挑战性的情况:新药、新靶点以及两者的组合。与最先进方法的比较表明,我们的 GFLearn 模型始终优于其他方法,展示了它在各种预测任务中的鲁棒性。此外,跨数据集评估和噪声扰动实验进一步验证了该模型在不同数据分布中的通用性。对Canertinib-PIK3C2G和MLN8054-FLT1这两对药物-靶点的案例研究进一步证明了GFLearn准确预测结合亲和力的能力,为药物筛选和再利用工作提供了宝贵的见解。
{"title":"GFLearn: Generalized Feature Learning for Drug-Target Binding Affinity Prediction.","authors":"Zibo Huang, Xinrui Weng, Le Ou-Yang","doi":"10.1109/JBHI.2025.3538497","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3538497","url":null,"abstract":"<p><p>Predicting drug-target binding affinity is critical for drug discovery, as it helps identify promising drug candidates and predict their effectiveness. Recent advancements in deep learning have made significant progress in tackling this task. However, existing methods heavily rely on training data, and their performance is often limited when predicting binding affinities for new drugs and targets. To address this challenge, we propose a novel Generalized Feature Learning (GFLearn) model for drug-target binding affinity prediction. By integrating Graph Neural Networks (GNNs) with a self-supervised invariant feature learning module, our GFLearn model can extract robust and highly generalizable features from both drugs and targets, significantly enhancing prediction performance. This innovation enables the model to effectively predict binding affinities for previously unseen drugs or targets, while also mitigates the common issue of prediction performance degrading due to shifts in data distribution. Extensive experiments were conducted on two diverse datasets across three challenging scenarios: new drugs, new targets, and combinations of both. Comparisons with state-of-the-art methods demonstrated that our GFLearn model consistently outperformed others, showcasing its robustness across various prediction tasks. Additionally, cross-dataset evaluations and noise perturbation experiments further validated the model's generalizability across different data distributions. Case studies on two drug-target pairs, Canertinib-PIK3C2G and MLN8054-FLT1, provided further evidence of GFLearn's ability to make accurate binding affinity predictions, offering valuable insights for drug screening and repurposing efforts.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PITCH: A Pathway-induced Prioritization of Personalized Cancer Driver Genes based on Higher-order Interactions.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-04. DOI: 10.1109/JBHI.2025.3538536
Yuhe Wang, Suoqin Jin, Xiufen Zou

Cancer is driven by specific mutations in genes known as cancer driver genes, whose identification is crucial for advancing cancer therapy. Although many computational methods have been proposed for this purpose, most provide a single driver gene list, ignoring the high heterogeneity of drivers across patients in a cohort. In addition, they often fail to capture the higher-order interactions among genes at the patient level. Here we introduce a novel method, PITCH, to prioritize personalized cancer driver genes by assessing the higher-order propagation dynamics among genes. PITCH constructs a patient-specific hypergraph model that represents the higher-order interactions well characterized in signaling pathways, enabling a more comprehensive assessment of gene influence in cancer development. PITCH does not require paired case-control data, simplifying its application in clinical practice. We evaluated our approach using data from four different types of cancer, demonstrating its superior performance in identifying cancer driver genes compared to existing methods. Importantly, PITCH is shown to identify both common and rare drivers. The results were validated against well-studied cancer gene databases, confirming the accuracy of the identified drivers. Additionally, most of the personalized driver genes identified by PITCH were actionable and druggable for most patients, offering significant potential for guiding personalized treatment strategies. Our approach represents a significant advance in cancer driver gene discovery, providing a powerful tool for the precise identification of therapeutic targets in cancer research.
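
The phrase "higher-order propagation" can be made concrete with a small random-walk sketch over a gene-by-pathway incidence matrix: scores flow from mutated seed genes to other genes through shared pathways (hyperedges). This only illustrates hypergraph propagation in general, not the PITCH algorithm; the matrix, damping factor, and seed vector below are made up.

```python
# Minimal sketch of random-walk propagation on a hypergraph.
import numpy as np

def hypergraph_propagate(H, s0, alpha=0.85, n_iter=50):
    Dv = H.sum(axis=1)            # gene degrees (pathways per gene)
    De = H.sum(axis=0)            # hyperedge degrees (genes per pathway)
    Dv[Dv == 0], De[De == 0] = 1, 1
    # Transition: gene -> pathway -> gene, normalised by both degree vectors.
    P = (H / Dv[:, None]) @ (H.T / De[:, None])
    s = s0.astype(float).copy()
    for _ in range(n_iter):
        s = alpha * (P.T @ s) + (1 - alpha) * s0
    return s  # higher score = higher priority as a candidate driver

H = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]], float)  # 4 genes, 3 pathways
print(hypergraph_propagate(H, np.array([1.0, 0.0, 0.0, 0.0])))     # seed on gene 0
```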

{"title":"PITCH: A Pathway-induced Prioritization of Personalized Cancer Driver Genes based on Higher-order Interactions.","authors":"Yuhe Wang, Suoqin Jin, Xiufen Zou","doi":"10.1109/JBHI.2025.3538536","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3538536","url":null,"abstract":"<p><p>Cancer is driven by specific mutations known as cancer driver genes, whose identification is crucial for advancing cancer therapy. Although many computational methods have been proposed with this purpose, most provide a single driver gene list ignoring the high heterogeneity of drivers across patients in cohort. Besides, they often fail to capture the higher-order interactions among genes at the patient level. Here we introduce a novel method PITCH to prioritize personalized cancer driver genes by assessing the higher-order propagation dynamics among genes. PITCH constructs a patient-specific hypergraph model that represents higher-order interactions well-characterized in signaling pathways, enabling a more comprehensive assessment of gene influence in cancer development. PITCH does not require paired case-control data, simplifying its application in clinical practice. We evaluated our approach using data from four different types of cancers, demonstrating its superior performance in identifying cancer driver genes compared to existing methods. Importantly, PITCH is shown to identify both common and rare drivers. The results were validated against well-studied cancer gene databases, confirming the accuracy of the identified drivers. Additionally, most of PITCH-identified personalized driver genes were actionable and druggable for most patients, offering significant potential for guiding personalized treatment strategies. Our approach represents a significant advancement in the field of cancer driver genes discovery, providing a powerful tool for the precise identification of therapeutic targets in cancer research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Graph Transformer for Brain Disorder Diagnosis.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-02-03. DOI: 10.1109/JBHI.2025.3538040
Ahsan Shehzad, Dongyu Zhang, Shuo Yu, Shagufta Abid, Feng Xia

Dynamic brain networks play a pivotal role in diagnosing brain disorders by capturing temporal changes in brain activity and connectivity. Previous methods often rely on sliding-window approaches for constructing these networks from fMRI data. However, these methods face two key limitations: a fixed temporal length that inadequately captures brain activity dynamics, and a global spatial scope that introduces noise and reduces sensitivity to localized dysfunctions. These challenges can lead to inaccurate brain network representations and potential misdiagnoses. To address these challenges, we propose BrainDGT, a dynamic Graph Transformer model designed to enhance the construction and analysis of dynamic brain networks for more accurate diagnosis of brain disorders. BrainDGT leverages adaptive brain states by deconvolving the Hemodynamic Response Function (HRF) within individual functional brain modules to generate dynamic graphs, addressing the limitations of fixed temporal length and global spatial scope. The model learns spatio-temporal local features through attention mechanisms within these graphs and captures global interactions across modules using adaptive fusion. This dual-level integration enhances the model's ability to analyze complex brain connectivity patterns. We validate BrainDGT's effectiveness through classification experiments on three fMRI datasets (ADNI, PPMI, and ABIDE), where it outperforms state-of-the-art methods. By enabling adaptive, localized analysis of dynamic brain networks, BrainDGT advances neuroimaging and supports the development of more precise diagnostic and treatment strategies in biomedical research.
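
The fusion step across modules can be pictured as attention over one embedding per functional brain module followed by pooling and a diagnostic head. The sketch below shows only that generic pattern; the module count, dimensions, and layer choices are assumptions, and it says nothing about how BrainDGT actually constructs its dynamic graphs.

```python
# Minimal sketch: transformer-based fusion of per-module brain embeddings.
import torch
import torch.nn as nn

class ModuleFusionClassifier(nn.Module):
    def __init__(self, n_modules=7, dim=64, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, module_embeddings):  # (batch, n_modules, dim)
        fused = self.encoder(module_embeddings)   # interactions across modules
        return self.head(fused.mean(dim=1))       # pool over modules, then classify

model = ModuleFusionClassifier()
print(model(torch.randn(4, 7, 64)).shape)  # torch.Size([4, 2])
```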

{"title":"Dynamic Graph Transformer for Brain Disorder Diagnosis.","authors":"Ahsan Shehzad, Dongyu Zhang, Shuo Yu, Shagufta Abid, Feng Xia","doi":"10.1109/JBHI.2025.3538040","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3538040","url":null,"abstract":"<p><p>Dynamic brain networks play a pivotal role in diagnosing brain disorders by capturing temporal changes in brain activity and connectivity. Previous methods often rely on sliding-window approaches for constructing these networks using fMRI data. However, these methods face two key limitations: a fixed temporal length that inadequately captures brain activity dynamics and a global spatial scope that introduces noise and reduces sensitivity to localized dysfunctions. These challenges can lead to inaccurate brain network representations and potential misdiagnoses.To address these challenges, we propose BrainDGT, a dynamic Graph Transformer model designed to enhance the construction and analysis of dynamic brain networks for more accurate diagnosis of brain disorders. BrainDGT leverages adaptive brain states by deconvolving the Hemodynamic Response Function (HRF) within individual functional brain modules to generate dynamic graphs, addressing the limitations of fixed temporal length and global spatial scope. The model learns spatio-temporal local features through attention mechanisms within these graphs and captures global interactions across modules using adaptive fusion. This dual-level integration enhances the model's ability to analyze complex brain connectivity patterns. We validate BrainDGT's effectiveness through classification experiments on three fMRI datasets (ADNI, PPMI, and ABIDE), where it outperforms state-of-the-art methods. By enabling adaptive, localized analysis of dynamic brain networks, BrainDGT advances neuroimaging and supports the development of more precise diagnostic and treatment strategies in biomedical research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prediction of Clinical Response of Transcranial Magnetic Stimulation Treatment for Major Depressive Disorder Using Hyperdimensional Computing.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-01-31. DOI: 10.1109/JBHI.2025.3537757
Lulu Ge, Aaron N McInnes, Alik S Widge, Keshab K Parhi

Cognitive control dysregulation is nearly universal across disorders, including major depressive disorder (MDD). Although transcranial magnetic stimulation (TMS) achieves response rates comparable to medication, its mechanism and its effect on cognitive control are not yet well understood. This paper investigates the predictability of the clinical response to TMS treatment using 34 cognitive variables measured from TMS treatment of 22 MDD subjects over an eight-week period. We employ a novel brain-inspired computing paradigm, hyperdimensional computing (HDC), to classify the effectiveness of TMS using leave-one-subject-out cross-validation (LOSOCV). Four performance metrics (accuracy, sensitivity, specificity, and AUC) are used, with AUC as the primary metric. Experimental results reveal that: (i) although SVM outperforms HDC in terms of accuracy, HDC achieves an AUC of 0.82, surpassing SVM by 0.07; (ii) the optimal performance for both classifiers is obtained with feature selection using SelectKBest; and (iii) among the top features selected by SelectKBest for the two classifiers, ws_MedRT (median rate for the Websurf task) shows a more distinguishable distribution between clinical responses ("1") and no clinical responses ("0"). In conclusion, these results highlight the potential of HDC for predicting clinical responses to TMS and underscore the importance of feature selection in improving classification performance.
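
A compact way to see what an HDC classifier does: project the selected cognitive features into a high-dimensional bipolar space, sum the encodings of each class into a prototype, and classify a new subject by similarity to the prototypes. The sketch below combines that recipe with SelectKBest; the hypervector dimensionality, k, and the random data are assumptions, and it is not the pipeline evaluated in the paper.

```python
# Minimal sketch of a bipolar hyperdimensional (HDC) classifier with SelectKBest.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 34))            # 22 subjects x 34 cognitive variables (random stand-in)
y = rng.integers(0, 2, size=22)          # 1 = clinical response, 0 = none

X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)

D = 10_000                               # hypervector dimensionality
proj = rng.choice([-1, 1], size=(X_sel.shape[1], D))

def encode(x):                           # random projection + sign -> bipolar hypervector
    return np.sign(x @ proj)

# Class prototypes: sign of the summed hypervectors of each class.
prototypes = {c: np.sign(sum(encode(x) for x in X_sel[y == c])) for c in (0, 1)}

def predict(x):                          # nearest prototype by dot-product similarity
    hv = encode(x)
    return max(prototypes, key=lambda c: hv @ prototypes[c])

print([predict(x) for x in X_sel[:5]])
```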

{"title":"Prediction of Clinical Response of Transcranial Magnetic Stimulation Treatment for Major Depressive Disorder Using Hyperdimensional Computing.","authors":"Lulu Ge, Aaron N McInnes, Alik S Widge, Keshab K Parhi","doi":"10.1109/JBHI.2025.3537757","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3537757","url":null,"abstract":"<p><p>Cognitive control dysregulation is nearly universal across disorders, including major depressive disorder (MDD). Achieving comparable response rates to medication, the transcranial magnetic stimulation (TMS) mechanism and its effect on cognitive control have not been well understood yet. This paper investigates the predictive capability of the clinical response to TMS treatment using 34 cognitive variables measured from TMS treatment of 22 MDD subjects over an eight-week period. We employ a novel brain-inspired computing paradigm, hyperdimensional computing (HDC), to classify the effectiveness of TMS using leave-one-subject-out cross-validation (LOSOCV). Four performance metrics-accuracy, sensitivity, specificity and AUC-are used, with AUC being the primary metric. Experimental results reveal that: i). Although SVM outperforms HDC in terms of accuracy, HDC achieves an AUC of 0.82, surpassing SVM by 0.07. ii). The optimal performance for both classifiers is obtained with feature selection using SelectKBest. iii) Among the top features selected by SelectKBest for the two classifiers, ws_MedRT (median rate for the Websurf task) shows a more distinguishable distribution between clinical responses (\"1\") and no clinical responses (\"0\"). In conclusion, these results highlight the potential of HDC for predicting clinical responses to TMS and underscore the importance of feature selection in improving classification performance.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RAFT-USENet: A Unified Network for Accurate Axial and Lateral Motion Estimation in Ultrasound Elastography Imaging.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-01-31. DOI: 10.1109/JBHI.2025.3536786
Sharmin Majumder, Md Tauhidul Islam, Raffaella Righetti

High-quality motion estimation is essential in ultrasound elastography (USE) for evaluating tissue mechanical properties and detecting abnormalities. Traditional methods, such as speckle tracking and regularized optimization, face challenges including noise, over-smoothing of displacements, and prolonged runtimes. Recent efforts have explored optical flow-based convolutional neural networks (CNNs). However, current approaches suffer from at least one of the following limitations: 1) reliance on the tissue incompressibility assumption, which compromises data fidelity and can introduce large errors; 2) dependence on ground-truth displacement data for supervised CNN methods; 3) use of a regularizer not aligned with tissue physics, relying only on first-order displacement derivatives; 4) use of an L2-norm regularizer that over-smooths motion estimates; and 5) a substantially large sampling size, increasing computational and memory demands, especially for classical methods. In this paper, we develop RAFT-USENet, a physics-informed, unsupervised neural network that estimates both axial and lateral displacements. We design RAFT-USENet by substantially modifying the optical-flow RAFT network to adapt it to high-frequency USE data. Extensive validation using simulation, phantom, and in vivo data demonstrates that RAFT-USENet significantly improves motion estimation performance compared to recent classical and CNN methods. The normalized cross-correlation between pre-deformation and warped post-deformation USE data using RAFT-USENet is estimated as 0.94, 0.88, and 0.82 on the simulation, breast phantom, and in vivo datasets, respectively, while the corresponding ranges for the comparative methods were 0.79-0.88, 0.76-0.85, and 0.69-0.81. Additionally, RAFT-USENet reduced computational time by 1.5-150 times compared to existing methods. These results suggest that RAFT-USENet may be a useful, reliable, and accurate tool for clinical elasticity imaging applications.
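
The normalized cross-correlation figures quoted above measure how well the estimated displacement field maps the post-deformation frame back onto the pre-deformation frame. The sketch below computes that metric on a toy example with a known 2-pixel axial shift and a nearest-neighbour warp; the data and the warping scheme are illustrative, not those used in the paper.

```python
# Minimal sketch of the NCC quality metric for displacement estimates.
import numpy as np

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def warp_back(post, du, dv):
    """Pull post-deformation samples back to pre-deformation coordinates."""
    h, w = post.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(np.round(yy + du).astype(int), 0, h - 1)   # axial displacement
    xs = np.clip(np.round(xx + dv).astype(int), 0, w - 1)   # lateral displacement
    return post[ys, xs]

rng = np.random.default_rng(0)
pre = rng.normal(size=(64, 64))
du, dv = np.full((64, 64), 2.0), np.zeros((64, 64))   # known 2-pixel axial motion
post = np.roll(pre, 2, axis=0)                        # simulated deformed frame
print(ncc(pre, warp_back(post, du, dv)))              # close to 1 away from the edges
```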

{"title":"RAFT-USENet: A Unified Network for Accurate Axial and Lateral Motion Estimation in Ultrasound Elastography Imaging.","authors":"Sharmin Majumder, Md Tauhidul Islam, Raffaella Righetti","doi":"10.1109/JBHI.2025.3536786","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3536786","url":null,"abstract":"<p><p>High-quality motion estimation is essential in ultrasound elastography (USE) for evaluating tissue mechanical properties and detecting abnormalities. Traditional methods, such as speckle tracking and regularized optimization, face challenges including noise, over-smoothing of displacements, and prolonged runtimes. Recent efforts have explored optical flow-based convolutional neural networks (CNNs). However, current approaches experience at least one of the following limitations: 1) reliance on tissue incompressibility assumption, which compromises data fidelity and can introduce large errors; 2) dependence on ground truth displacement data for supervised CNN methods; 3) use of a regularizer not aligned with tissue physics by relying only on first-order displacement derivatives; 4) use of a L2-norm regularizer that over-smoothes motion estimates; and 5) substantially large sampling size, increasing computational and memory demands, especially for classical methods. In this paper, we develop RAFT-USENet, a physics-informed, unsupervised neural network to estimate both axial and lateral displacements. We design RAFT-USENet by substantially modifying optical flow RAFT network to adapt it to high-frequency USE data. Extensive validation using simulation, phantom and in vivo data demonstrates that RAFT-USENet significantly improves motion estimation performance compared to recent classical and CNN methods. The normalized cross-correlation between pre- and warped post-deformation USE data using RAFT-USENet is estimated as 0.94, 0.88, and 0.82 in simulation, breast phantom and in vivo datasets, respectively, while corresponding comparative methods ranges were found 0.79-0.88, 0.76-0.85, and 0.69-0.81. Additionally, RAFT-USENet reduced computational time by 1.5-150 times compared to existing methods. These results suggest that RAFT-USENet may be a potentially useful reliable and accurate tool for clinical elasticity imaging applications.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MBSCLoc: Multi-label subcellular localization predict based on cluster balanced subspace partitioning method and multi-class contrastive representation learning.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-01-31. DOI: 10.1109/JBHI.2025.3537284
Bangyi Zhang, Yun Zuo, Zhiqiang Dai, Sifan Zhu, Xuan Liu, Zhaohong Deng

mRNA subcellular localization is a prevalent and essential mechanism that precisely regulates protein translation and significantly impacts various cellular processes. Studying mRNA subcellular localization has advanced the understanding of mRNA function, yet existing methods face limitations, including imbalanced data, suboptimal model performance, and inadequate generalization, particularly in multi-label localization scenarios, for which solutions are scarce. This study introduces MBSCLoc, a predictor for mRNA multi-label subcellular localization. MBSCLoc predicts mRNA locations across multiple cellular compartments simultaneously, overcoming challenges such as single-location prediction, incomplete feature extraction, and imbalanced data. MBSCLoc leverages the UTR-LM model for feature extraction, followed by multi-class contrastive representation learning and Clustering Balanced Subspace Partitioning to construct balanced subspaces. It then optimizes the sample distribution to tackle severe data imbalance and uses multiple XGBoost classifiers, integrated through voting, to enhance accuracy and generalization. Five-fold cross-validation and independent testing results show that MBSCLoc significantly outperforms other methods. Additionally, MBSCLoc offers superior pixel-level interpretability, strongly supporting mRNA multi-label subcellular localization research. Crucially, the importance of the 5' UTR and 3' UTR regions has been preliminarily confirmed using traditional biological analysis and Tree-SHAP, with most mRNA sequences showing significant relevance in these regions, especially the 3' UTR, where about 80% of specific sites reach peak significance. To facilitate the use of MBSCLoc by researchers, a freely accessible web server has also been developed: http://www.mbscloc.com/.
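
The multi-label formulation itself, one 0/1 decision per compartment, is easy to show in a few lines. The sketch below fits one gradient-boosted classifier per compartment with scikit-learn's MultiOutputClassifier as a stand-in for the XGBoost voting ensemble, and random features stand in for UTR-LM embeddings; it does not reflect MBSCLoc's actual pipeline.

```python
# Minimal multi-label localization sketch: one classifier per compartment.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # stand-in mRNA sequence embeddings
Y = rng.integers(0, 2, size=(200, 4))    # 4 compartments, multi-label 0/1 targets

clf = MultiOutputClassifier(GradientBoostingClassifier(n_estimators=50))
clf.fit(X[:150], Y[:150])
print(clf.predict(X[150:155]))           # one 0/1 vector per mRNA
```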

{"title":"MBSCLoc: Multi-label subcellular localization predict based on cluster balanced subspace partitioning method and multi-class contrastive representation learning.","authors":"Bangyi Zhang, Yun Zuo, Zhiqiang Dai, Sifan Zhu, Xuan Liu, Zhaohong Deng","doi":"10.1109/JBHI.2025.3537284","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3537284","url":null,"abstract":"<p><p>mRNA subcellular localization is a prevalent and essential mechanism that precisely regulates protein translation and significantly impacts various cellular processes. mRNA subcellular localization has advanced the understanding of mRNA function, yet existing methods face limitations, including imbalanced data, suboptimal model performance, and inadequate generalization, particularly in multi-label localization scenarios where solutions are scarce. This study introduces MBSCLoc, a predictor for mRNA multi-label subcellular localization. MBSCLoc predicts mRNA locations across multiple cellular compartments simultaneously, overcoming challenges like single-location prediction, incomplete feature extraction, and imbalanced data. MBSCLoc leverages UTR-LM model for feature extraction, followed by multi-class contrastive representation learning and Clustering Balanced Subspace Partitioning to construct balanced subspaces. It then optimizes sample distribution to tackle severe data imbalance and uses multiple XGBoost classifiers, integrated through voting, to enhance accuracy and generalization. Five-fold cross-validation and independent testing results show that MBSCLoc significantly outperforms other methods. Additionally, MBSCLoc offers superior pixel-level interpretability, strongly supporting mRNA multi-label subcellular localization research. Crucially, the importance of the 5' UTR and 3' UTR regions has been preliminarily confirmed using traditional biological analysis and Tree-SHAP, with most mRNA sequences showing significant relevance in these regions, especially the 3' UTR where about 80% of specific sites reach peak significance. Concurrently, in order to facilitate the use of MBSCLoc by researchers, a freely accessible web has also been developed: http://www.mbscloc.com/.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fall-Risk Monitoring in Diverse Terrains Using Dual-Task Learning and Wearable Sensing System.
IF 6.7, CAS Zone 2 (Medicine), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2025-01-30. DOI: 10.1109/JBHI.2025.3536030
Chih-Lung Lin, Yuan-Hao Ho, Fang-Yi Lin, Pi-Shan Sung, Cheng-Yi Huang

As the elderly population grows, falling accidents become more frequent, and the need for fall-risk monitoring systems increases. Deep learning models for fall-risk movement detection neglect the connections between the terrain and fall-hazard movements. This issue can result in false alarms, particularly when a person encounters changing terrain. This work introduces a novel multi-sensor system that integrates terrain perception sensors with an inertial measurement unit (IMU) to monitor fall risk on diverse terrains. Additionally, a dual-task learning (DTL) architecture based on a modified CNN-LSTM model is implemented; it is used to determine the fall-risk level and the terrain from sensor signals. Three fall-risk levels - "normal," "near-fall," and "fall" - are identified as being associated with "flat ground," "stepping up," and "stepping down" terrains. Ten young subjects performed 16 activities on flat and stepping terrains in a laboratory setting, and ten elderly individuals were recruited to perform four activities in the hospital. The accuracies of fall-risk level and terrain classification by the proposed system are 97.6% and 95.2%, respectively. The system detects pre-impact fall movements, with a fall prediction accuracy of 97.7% and an average lead time of 329 ms for fall trials, revealing the model's effectiveness. The overall monitoring accuracy for elderly individuals is 99.8%, confirming the robustness of the proposed system. This work discusses the impact of sensor type and the DTL model architecture on the classification of fall-risk levels across various terrains. The results demonstrate that the proposed method is reliable for monitoring the risk of falling.
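
Dual-task learning here simply means one shared encoder with two classification heads, one per task. A minimal CNN-LSTM sketch of that arrangement is given below; the channel count, window length, and layer sizes are illustrative assumptions rather than the configuration reported in the paper.

```python
# Minimal sketch of a dual-task CNN-LSTM over windowed IMU + terrain signals.
import torch
import torch.nn as nn

class DualTaskCNNLSTM(nn.Module):
    def __init__(self, n_channels=9, n_risk=3, n_terrain=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.risk_head = nn.Linear(64, n_risk)        # normal / near-fall / fall
        self.terrain_head = nn.Linear(64, n_terrain)  # flat / step up / step down

    def forward(self, x):                              # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)            # (batch, time, 64)
        _, (h, _) = self.lstm(feats)
        shared = h[-1]                                 # last hidden state, shared by both tasks
        return self.risk_head(shared), self.terrain_head(shared)

model = DualTaskCNNLSTM()
risk, terrain = model(torch.randn(4, 9, 128))
print(risk.shape, terrain.shape)  # torch.Size([4, 3]) torch.Size([4, 3])
```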

{"title":"Fall-Risk Monitoring in Diverse Terrains Using Dual-Task Learning and Wearable Sensing System.","authors":"Chih-Lung Lin, Yuan-Hao Ho, Fang-Yi Lin, Pi-Shan Sung, Cheng-Yi Huang","doi":"10.1109/JBHI.2025.3536030","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3536030","url":null,"abstract":"<p><p>As the elderly population grows, falling accidents become more frequent, and the need for fall-risk monitoring systems increases. Deep learning models for fallrisk movement detection neglect the connections between the terrain and fall-hazard movements. This issue can result in false alarms, particularly when a person encounters changing terrain. This work introduces a novel multisensor system that integrates terrain perception sensors with an inertial measurement unit (IMU) to monitor fall-risk on diverse terrains. Additionally, a dual-task learning (DTL) architecture that is based on a modified CNNLSTM model is implemented; it is used to determine fall-risk level and the terrain from sensor signals. Three fall-risk levels - \"normal,\" \"near-fall,\" and \"fall\" - are identified as being associated with \"flat ground,\" \"stepping up,\" and \"stepping down\" terrains. Ten young subjects performed 16 activities on flat and stepping terrains in a laboratory setting, and ten elderly individuals were recruited to perform four activities in the hospital. The accuracies of classification of fall-risk levels and terrains by the proposed system are 97.6% and 95.2%, respectively. The system detects pre-impact fall movements, with a fall prediction accuracy of 97.7% and an average lead time of 329ms for fall trials, revealing the model's effectiveness. The overall monitoring accuracy for elderly individuals is 99.8%, confirming the robustness of the proposed system. This work discusses the impact of sensor type and the model architecture of DTL on the classification of fall-risk levels across various terrains. The results demonstrate that the proposed method is reliable for monitoring the risk of falling.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0