
Latest Publications in IEEE Journal of Biomedical and Health Informatics

Guest Editorial: Deep Medicine and AI for Health
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-10 DOI: 10.1109/JBHI.2024.3523751
Maria Fernanda Cabrera;Tayo Obafemi-Ajayi;Ahmed Metwally;Bobak J Mortazavi
{"title":"Guest Editorial: Deep Medicine and AI for Health","authors":"Maria Fernanda Cabrera;Tayo Obafemi-Ajayi;Ahmed Metwally;Bobak J Mortazavi","doi":"10.1109/JBHI.2024.3523751","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523751","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"737-740"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Large Language Models on Biomedical Data Analysis: A Survey.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-10 DOI: 10.1109/JBHI.2025.3530794
Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan

With the rapid development of Large Language Model (LLM) technology, LLMs have become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge of LLMs, so a summary of LLM applications in biomedical data analysis is urgently needed. Herein, we present such a review, summarizing the latest research on LLMs in biomedicine. We first outline LLM techniques, then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts, and drug discovery. Finally, we discuss the challenges LLMs face in biomedical data analysis. This review is intended for researchers interested in LLM technology and aims to help them understand and apply LLMs in biomedical data analysis research.

{"title":"The Large Language Models on Biomedical Data Analysis: A Survey.","authors":"Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan","doi":"10.1109/JBHI.2025.3530794","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3530794","url":null,"abstract":"<p><p>With the rapid development of Large Language Model (LLM) technology, it has become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge about LLM. Therefore, there is an urgent need for a summary of LLM applications in biomedical data analysis. Herein, we propose this review by summarizing the latest research work on LLM in biomedicine. In this review, LLM techniques are first outlined. We then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts and drug discovery. Finally, the challenges of LLM in biomedical data analysis are discussed. In summary, this review is intended for researchers interested in LLM technology and aims to help them understand and apply LLM in biomedical data analysis research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal of Biomedical and Health Informatics Information for Authors
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-10 DOI: 10.1109/JBHI.2024.3523743
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2024.3523743","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523743","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"C3-C3"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879095","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MediViSTA: Medical Video Segmentation via Temporal Fusion SAM Adaptation for Echocardiography.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-10 DOI: 10.1109/JBHI.2025.3540306
Sekeun Kim, Pengfei Jin, Cheng Chen, Kyungsang Kim, Zhiliang Lyu, Hui Ren, Sunghwan Kim, Zhengliang Liu, Aoxiao Zhong, Tianming Liu, Xiang Li, Quanzheng Li

Despite achieving impressive results in general-purpose semantic segmentation with strong generalization on natural images, the Segment Anything Model (SAM) has shown less precision and stability in medical image segmentation. In particular, the SAM architecture is designed for 2D natural images and therefore cannot handle three-dimensional information, which is particularly important for medical imaging modalities that are often volumetric or video data. In this paper, we introduce MediViSTA, a parameter-efficient fine-tuning method designed to adapt the vision foundation model to medical video, with a specific focus on echocardiography segmentation. To achieve spatial adaptation, we propose a frequency feature fusion technique that injects spatial frequency information from a CNN branch. For temporal adaptation, we integrate temporal adapters within the transformer blocks of the image encoder. With this fine-tuning strategy, only a small subset of pre-trained parameters is updated, allowing efficient adaptation to echocardiography data. The effectiveness of our method has been comprehensively evaluated on three datasets, comprising two public datasets and one multi-center in-house dataset. Our method consistently outperforms various state-of-the-art approaches without using any prompts. Furthermore, our model exhibits strong generalization capabilities on unseen datasets, surpassing the second-best approach by 2.15% in Dice and 0.09 in temporal consistency. The results demonstrate the potential of MediViSTA to significantly advance echocardiography video segmentation, offering improved accuracy and robustness in cardiac assessment applications.
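The abstract describes the adaptation pattern but not its implementation; below is a minimal PyTorch sketch of the general idea it names: a small trainable temporal adapter attached to a frozen pre-trained transformer block, so that only a small subset of parameters is updated. All module names, shapes, and the convolutional temporal mixer are illustrative assumptions, not the authors' code.

```python
# Sketch only: parameter-efficient temporal adaptation of a frozen block.
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Bottleneck adapter that mixes information across the frame axis."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.temporal = nn.Conv1d(bottleneck, bottleneck, kernel_size=3,
                                  padding=1)  # mixes adjacent frames
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):  # x: (batch, frames, tokens, dim)
        b, t, n, _ = x.shape
        h = self.down(x)                                 # (b, t, n, bneck)
        h = h.permute(0, 2, 3, 1).reshape(b * n, -1, t)  # per-token series
        h = torch.relu(self.temporal(h))                 # temporal mixing
        h = h.reshape(b, n, -1, t).permute(0, 3, 1, 2)   # back to (b,t,n,·)
        return x + self.up(h)                            # residual

class AdaptedBlock(nn.Module):
    """Frozen pre-trained block followed by a trainable temporal adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False       # backbone stays frozen
        self.adapter = TemporalAdapter(dim)  # only this part is trained

    def forward(self, x):  # x: (batch, frames, tokens, dim)
        b, t, n, d = x.shape
        y = self.block(x.reshape(b * t, n, d)).reshape(b, t, n, d)
        return self.adapter(y)

# Toy usage: one encoder layer adapted for 8-frame echo clips.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
adapted = AdaptedBlock(layer, dim=256)
clip = torch.randn(2, 8, 196, 256)        # (batch, frames, tokens, dim)
print(adapted(clip).shape)                # torch.Size([2, 8, 196, 256])
```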

{"title":"MediViSTA: Medical Video Segmentation via Temporal Fusion SAM Adaptation for Echocardiography.","authors":"Sekeun Kim, Pengfei Jin, Cheng Chen, Kyungsang Kim, Zhiliang Lyu, Hui Ren, Sunghwan Kim, Zhengliang Liu, Aoxiao Zhong, Tianming Liu, Xiang Li, Quanzheng Li","doi":"10.1109/JBHI.2025.3540306","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540306","url":null,"abstract":"<p><p>Despite achieving impressive results in general-purpose semantic segmentation with strong generalization on natural images, the Segment Anything Model (SAM) has shown less precision and stability in medical image segmentation. In particular, the SAM architecture is designed for 2D natural images and is therefore not support to handle three-dimensional information, which is particularly important for medical imaging modalities that are often volumetric or video data. In this paper, we introduce MediViSTA, a parameter-efficient fine-tuning method designed to adapt the vision foundation model for medical video, with a specific focus on echocardiography segmentation. To achieve spatial adaptation, we propose a frequency feature fusion technique that injects spatial frequency information from a CNN branch. For temporal adaptation, we integrate temporal adapters within the transformer blocks of the image encoder. Using a fine-tuning strategy, only a small subset of pre-trained parameters is updated, allowing efficient adaptation to echocardiography data. The effectiveness of our method has been comprehensively evaluated on three datasets, comprising two public datasets and one multi-center in-house dataset. Our method consistently outperforms various state-of-the-art approaches without using any prompts. Furthermore, our model exhibits strong generalization capabilities on unseen datasets, surpassing the second-best approach by 2.15% in Dice and 0.09 in temporal consistency. The results demonstrate the potential of MediViSTA to significantly advance echocardiography video segmentation, offering improved accuracy and robustness in cardiac assessment applications.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sequential sEMG Recognition With Knowledge Transfer and Dynamic Graph Network Based on Spatio-Temporal Feature Extraction Network
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-10 DOI: 10.1109/JBHI.2024.3457026
Zhilin Li;Xianghe Chen;Jie Li;Zhongfei Bai;Hongfei Ji;Lingyu Liu;Lingjing Jin
Surface electromyography (sEMG) signals are electrical signals released by muscles during movement, which can directly reflect muscle conditions during various actions. When a series of continuous static actions is connected along the temporal axis, a sequential action is formed, which is more aligned with people's intuitive understanding of real-life movements. The signals acquired during sequential actions are known as sequential sEMG signals; they include an additional sequence dimension and embody richer features compared to static sEMG signals. However, existing methods make inadequate use of the signals' sequential characteristics. Addressing these gaps, this paper introduces the Spatio-Temporal Feature Extraction Network (STFEN), which includes a Sequential Feature Analysis Module based on static-sequential knowledge transfer and a Spatial Feature Analysis Module based on dynamic graph networks that analyzes the internal relationships between the leads. The effectiveness of STFEN is tested both on modified publicly available datasets and on our acquired Arabic Digit Sequential Electromyography (ADSE) dataset. The results show that STFEN outperforms existing models in recognizing sequential sEMG signals. Experiments have confirmed the reliability and wide applicability of STFEN in analyzing complex muscle activities. Furthermore, this work suggests STFEN's potential benefits in rehabilitation medicine, particularly for stroke recovery, and shows promising future applications.
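As a rough illustration of the dynamic-graph idea mentioned above — relationships between leads computed from the data rather than fixed in advance — here is a minimal PyTorch sketch in which the adjacency matrix is derived from the current lead features and used for one round of message passing. The similarity measure and all dimensions are assumptions for illustration, not the STFEN architecture.

```python
# Sketch only: a dynamic graph layer over sEMG leads whose adjacency is
# recomputed from the current features (softmax-normalized similarity).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.query = nn.Linear(in_dim, out_dim)
        self.key = nn.Linear(in_dim, out_dim)
        self.value = nn.Linear(in_dim, out_dim)

    def forward(self, x):  # x: (batch, leads, features)
        q, k = self.query(x), self.key(x)
        # Data-dependent adjacency: each row weights the other leads.
        adj = F.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        return torch.relu(adj @ self.value(x))  # message passing over leads

# Toy usage: 8 sEMG leads, 128 per-lead features per window.
layer = DynamicGraphLayer(128, 64)
windows = torch.randn(4, 8, 128)
print(layer(windows).shape)  # torch.Size([4, 8, 64])
```

Written this way, a "dynamic graph" amounts to attention over the lead axis: the edge weights adapt to each input window instead of being a fixed electrode topology.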
{"title":"Sequential sEMG Recognition With Knowledge Transfer and Dynamic Graph Network Based on Spatio-Temporal Feature Extraction Network","authors":"Zhilin Li;Xianghe Chen;Jie Li;Zhongfei Bai;Hongfei Ji;Lingyu Liu;Lingjing Jin","doi":"10.1109/JBHI.2024.3457026","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3457026","url":null,"abstract":"Surface electromyography (sEMG) signals are electrical signals released by muscles during movement, which can directly reflect the muscle conditions during various actions. When a series of continuous static actions are connected along the temporal axis, a sequential action is formed, which is more aligned with people's intuitive understanding of real-life movements. The signals acquired during sequential actions are known as sequential sEMG signals, including an additional dimension of sequence, embodying richer features compared to static sEMG signals. However, existing methods show inadequate utilization of the signals' sequential characteristics. Addressing these gaps, this paper introduces the Spatio-Temporal Feature Extraction Network (STFEN), which includes a Sequential Feature Analysis Module based on static-sequential knowledge transfer, and a Spatial Feature Analysis Module based on dynamic graph networks to analyze the internal relationships between the leads. The effectiveness of STFEN is tested on both modified publicly available datasets and on our acquired Arabic Digit Sequential Electromyography (ADSE) dataset. The results show that STFEN outperforms existing models in recognizing sequential sEMG signals. Experiments have confirmed the reliability and wide applicability of STFEN in analyzing complex muscle activities. Furthermore, this work also suggests STFEN's potential benefits in rehabilitation medicine, particularly for stroke recovery, and shows promising future applications.","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"887-899"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Dual-Modal Fusion Framework for Detection of Mild Cognitive Impairment Based on Autobiographical Memory.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-07 DOI: 10.1109/JBHI.2025.3540207
Ho-Ling Chang, Thiri Wai, Yu-Shan Liao, Sheng-Ya Lin, Yu-Ling Chang, Li-Chen Fu

This paper introduces a dual-modal early cognitive impairment detection system based on autobiographical memory (AM) tests. Our approach automatically extracts pre-defined acoustic features and self-designed embeddings to enhance the linguistic representation of spontaneous speech data. By integrating dual-modal data, we effectively enrich the features that aid model learning, especially for addressing the subtle symptoms exhibited by individuals with mild cognitive impairment (MCI), an intermediate stage between healthy individuals and those with Alzheimer's disease (AD). To account for spontaneous speech's unstructured and implicit nature, two additional embeddings, namely speaker embedding and conversation embedding, are introduced to augment the information available for model learning, thus enriching the feature set and improving model accuracy. The proposed dual-modal approach is tested on a self-collected Chinese spontaneous speech dataset, owing to the limited open-access unstructured speech datasets for MCI detection. The system's effectiveness is evaluated through a series of experiments, including ablation studies, to determine the impact of each module on overall performance. The proposed system achieved an average accuracy of 78% in detecting MCI, demonstrating its comparative effectiveness. Further gains come from integrating a directional encoder tailored to capture temporal information across sequential visits; this addition leads to a 3% increase in detection accuracy within the subset of participants who have undergone multiple AM test sessions. Implementing such a longitudinal approach to analyzing unstructured speech data for MCI detection taps into a relatively underexplored area of research, offering new insights.
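To make the fusion pattern concrete, here is a minimal PyTorch sketch of the shape the abstract describes: acoustic features and a linguistic representation are concatenated with learned speaker and conversation embeddings before classification. Feature dimensions, vocabulary sizes, and the classifier head are hypothetical stand-ins, not the published system.

```python
# Sketch only: dual-modal fusion with auxiliary speaker/conversation
# embeddings feeding a binary MCI vs. healthy-control classifier.
import torch
import torch.nn as nn

class DualModalClassifier(nn.Module):
    def __init__(self, acoustic_dim=88, text_dim=768,
                 n_speakers=100, n_conversations=4, emb_dim=32):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, emb_dim)
        self.conv_emb = nn.Embedding(n_conversations, emb_dim)
        fused = acoustic_dim + text_dim + 2 * emb_dim
        self.head = nn.Sequential(
            nn.Linear(fused, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, 2))  # logits: MCI vs. healthy

    def forward(self, acoustic, text, speaker_id, conv_id):
        z = torch.cat([acoustic, text,
                       self.speaker_emb(speaker_id),
                       self.conv_emb(conv_id)], dim=-1)
        return self.head(z)

model = DualModalClassifier()
logits = model(torch.randn(4, 88), torch.randn(4, 768),
               torch.randint(0, 100, (4,)), torch.randint(0, 4, (4,)))
print(logits.shape)  # torch.Size([4, 2])
```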

Citations: 0
CodePhys: Robust Video-Based Remote Physiological Measurement Through Latent Codebook Querying.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-07 DOI: 10.1109/JBHI.2025.3540134
Shuyang Chu, Menghan Xia, Mengyao Yuan, Xin Liu, Tapio Seppanen, Guoying Zhao, Jingang Shi

Remote photoplethysmography (rPPG) aims to measure physiological signals from facial videos without contact and has shown great potential in many applications. Most existing methods directly extract video-based rPPG features by designing neural networks for heart rate estimation. Although they can achieve acceptable results, recovering the rPPG signal faces intractable challenges when real-world interference affects the facial video. Specifically, facial videos are inevitably affected by non-physiological factors (e.g., camera device noise, defocus, and motion blur), leading to distortion of the extracted rPPG signals. Recent rPPG extraction methods are easily affected by such interference and degradation, resulting in noisy rPPG signals. In this paper, we propose a novel method named CodePhys, which innovatively treats rPPG measurement as a code query task in a noise-free proxy space (i.e., a codebook) constructed from ground-truth PPG signals. We treat noisy rPPG features as queries and generate high-fidelity rPPG features by matching them with noise-free PPG features from the codebook. Our approach also incorporates a spatial-aware encoder network with a spatial attention mechanism to highlight physiologically active areas, and uses a distillation loss to reduce the influence of non-periodic visual interference. Experimental results on four benchmark datasets demonstrate that CodePhys outperforms state-of-the-art methods in both intra-dataset and cross-dataset settings.
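The codebook-querying step can be sketched as a vector-quantization-style lookup: a noisy rPPG feature is replaced by its nearest entry in a codebook that, per the abstract, would be constructed from ground-truth PPG signals. In this toy version the codebook is randomly initialized, and the straight-through estimator is one common way to keep the lookup differentiable; both are assumptions, not the authors' implementation.

```python
# Sketch only: nearest-neighbor codebook querying with a
# straight-through gradient, standing in for a learned clean-PPG codebook.
import torch
import torch.nn as nn

class CodebookQuery(nn.Module):
    def __init__(self, num_codes: int = 512, dim: int = 128):
        super().__init__()
        # CodePhys builds its codebook from ground-truth PPG; here it is
        # randomly initialized purely for illustration.
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, query):  # query: (batch, dim) noisy rPPG features
        d = torch.cdist(query, self.codebook.weight)  # (batch, num_codes)
        clean = self.codebook(d.argmin(dim=-1))       # nearest clean code
        # Straight-through: forward uses the code, gradients skip argmin.
        return query + (clean - query).detach()

vq = CodebookQuery()
noisy = torch.randn(4, 128)
print(vq(noisy).shape)  # torch.Size([4, 128])
```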

{"title":"CodePhys: Robust Video-Based Remote Physiological Measurement Through Latent Codebook Querying.","authors":"Shuyang Chu, Menghan Xia, Mengyao Yuan, Xin Liu, Tapio Seppanen, Guoying Zhao, Jingang Shi","doi":"10.1109/JBHI.2025.3540134","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540134","url":null,"abstract":"<p><p>Remote photoplethysmography (rPPG) aims to measure non-contact physiological signals from facial videos, which has shown great potential in many applications. Most existing methods directly extract video-based rPPG features by designing neural networks for heart rate estimation. Although they can achieve acceptable results, the recovery of rPPG signal faces intractable challenges when interference from real-world scenarios takes place on facial video. Specifically, facial videos are inevitably affected by non-physiological factors (e.g., camera device noise, defocus, and motion blur), leading to the distortion of extracted rPPG signals. Recent rPPG extraction methods are easily affected by interference and degradation, resulting in noisy rPPG signals. In this paper, we propose a novel method named CodePhys, which innovatively treats rPPG measurement as a code query task in a noise-free proxy space (i.e., codebook) constructed by ground-truth PPG signals. We consider noisy rPPG features as queries and generate high-fidelity rPPG features by matching them with noise-free PPG features from the codebook. Our approach also incorporates a spatial-aware encoder network with a spatial attention mechanism to highlight physiologically active areas and uses a distillation loss to reduce the influence of non-periodic visual interference. Experimental results on four benchmark datasets demonstrate that CodePhys outperforms state-of-the-art methods in both intra-dataset and cross-dataset settings.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trans-MoRFs: A Disordered Protein Predictor Based on the Transformer Architecture.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-07 DOI: 10.1109/JBHI.2025.3539710
Chaolu Meng, Yunyun Shi, Xueliang Fu, Quan Zou, Wu Han

Intrinsically disordered regions (IDRs) of proteins are crucial for a wide range of biological functions, with molecular recognition features (MoRFs) being of particular significance in protein interactions and cellular regulation. However, identifying MoRFs has been a significant challenge in computational biology owing to their disorder-to-order transition properties. Currently, only a limited number of experimentally validated MoRFs are known, which has prompted the development of computational methods for predicting MoRFs from protein chains. Considering the limitations of existing MoRF predictors regarding prediction accuracy and adaptability to diverse protein sequence lengths, this study introduces Trans-MoRFs, a novel MoRF predictor based on the transformer architecture, for identifying MoRFs within the IDRs of proteins. Trans-MoRFs employs the self-attention mechanism of the transformer to efficiently capture interactions between distant residues in protein sequences. It handles protein sequences of different lengths stably and efficiently, and performs well on both short and long sequences. On multiple benchmark datasets, the model attained a mean area-under-the-curve score of 0.94, higher than that of all existing models, and significantly outperformed existing combined and single MoRF prediction tools on multiple performance metrics. Trans-MoRFs offers excellent accuracy and a wide range of applications for predicting MoRFs and other functionally important fragments in the disordered regions of proteins. It offers significant assistance in comprehending protein functions, precisely pinpointing functional segments within disordered protein regions, and facilitating the discovery of novel drug targets. We also offer a web server for related research: http://112.124.26.17:8007.
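Per-residue MoRF prediction with a transformer encoder can be sketched as token classification: each amino acid becomes a token, self-attention lets distant residues interact, and a linear head emits one MoRF logit per position. The 20-letter alphabet encoding and all hyperparameters below are illustrative assumptions, not the published architecture.

```python
# Sketch only: per-residue MoRF tagging as transformer token classification.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

class ResidueTagger(nn.Module):
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(len(AMINO_ACIDS), dim)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, 1)  # one MoRF logit per residue

    def forward(self, tokens):  # tokens: (batch, seq_len) residue indices
        h = self.encoder(self.embed(tokens))  # self-attention over residues
        return self.head(h).squeeze(-1)       # (batch, seq_len)

# Toy usage on one short sequence; output length equals sequence length.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
tokens = torch.tensor([[AA_INDEX[aa] for aa in seq]])
model = ResidueTagger()
probs = torch.sigmoid(model(tokens))
print(probs.shape)  # torch.Size([1, 33])
```

Because nothing in this formulation depends on a fixed sequence length, the same model accepts short and long chains alike, which is the adaptability the abstract emphasizes.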

{"title":"Trans-MoRFs: A Disordered Protein Predictor Based on the Transformer Architecture.","authors":"Chaolu Meng, Yunyun Shi, Xueliang Fu, Quan Zou, Wu Han","doi":"10.1109/JBHI.2025.3539710","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3539710","url":null,"abstract":"<p><p>Intrinsically disordered regions (IDRs) of proteins are crucial for a wide range of biological functions, with molecular recognition features (MoRFs) being of particular significance in protein interactions and cellular regulation. However, the identification of MoRFs has been a significant challenge in computational biology owing to their disorder-to-order transition properties. Currently, only a limited number of experimentally validated MoRFs are known, which has prompted the development of computational methods for predicting MoRFs from protein chains. Considering the limitations of existing MoRF predictors regarding prediction accuracy and adaptability to diverse protein sequence lengths, this study introduces Trans-MoRFs, a novel MoRF predictor based on the transformer architecture, for identifying MoRFs within IDRs of proteins. Trans-MoRFs employ the self-attention mechanism of the transformer to efficiently capture the interactions of distant residues in protein sequences. They demonstrate stability and high efficiency in dealing with protein sequences of different lengths and performs well on both short and long sequences. On multiple benchmark datasets, the model attained a mean area under the curve score of 0.94, which is higher than those of all existing models, and significantly outperformed existing combined and single MoRF prediction tools on multiple performance metrics. Trans-MoRFs have excellent accuracy and a wide range of applications for predicting MoRFs and other functionally important fragments in the disordered regions of proteins. They offer significant assistance in comprehending protein functions, precisely pinpointing functional segments within disordered protein regions and facilitating the discovery of novel drug targets. We also offer a web server for related research: http://112.124.26.17:8007.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A multi-objective comprehensive framework for predicting protein-peptide interactions and binding residues.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-06 DOI: 10.1109/JBHI.2025.3539313
Ruheng Wang, Xuetong Yang, Chao Pang, Leyi Wei

The identification of protein-peptide interacting pairs and their corresponding binding residues is fundamentally important and can greatly facilitate peptide therapeutics design and the understanding of protein function mechanisms. Recently, several computational approaches have been proposed to solve the protein-peptide interaction prediction problem. However, most existing methods cannot directly predict the protein-peptide interacting pairs and the binding residues from protein and peptide sequences simultaneously. Here, we developed a Comprehensive Protein-Peptide Interaction prediction Framework (CPPIF) to predict both binary protein-peptide interactions and their binding residues. We also constructed a benchmark dataset containing more than 8,900 protein-peptide interacting pairs with non-covalent interactions and their corresponding binding residues to systematically evaluate the performance of existing models. Comprehensive evaluation on the benchmark datasets demonstrated that CPPIF can successfully predict the non-covalent protein-peptide interactions that previous prediction methods cannot effectively capture. Moreover, CPPIF outperformed other state-of-the-art methods in predicting binding residues in the peptides and achieved good performance in identifying important binding residues in the proteins.
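The multi-objective setup the abstract describes — a pair-level interaction decision plus per-residue binding labels — maps naturally onto a shared pair of encoders with two heads, as in the minimal sketch below. The GRU encoders, one-hot inputs, and dimensions are assumptions for illustration, not the published CPPIF architecture.

```python
# Sketch only: one model, two objectives — a binary pair-level
# interaction head and a per-residue binding head for the peptide.
import torch
import torch.nn as nn

class PairPredictor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.prot_enc = nn.GRU(21, dim, batch_first=True)  # 20 AAs + pad
        self.pep_enc = nn.GRU(21, dim, batch_first=True)
        self.pair_head = nn.Linear(2 * dim, 1)   # does the pair interact?
        self.residue_head = nn.Linear(dim, 1)    # does this residue bind?

    def forward(self, prot, pep):  # one-hot inputs: (batch, length, 21)
        prot_h, _ = self.prot_enc(prot)
        pep_h, _ = self.pep_enc(pep)
        pooled = torch.cat([prot_h.mean(1), pep_h.mean(1)], dim=-1)
        return self.pair_head(pooled), self.residue_head(pep_h).squeeze(-1)

model = PairPredictor()
pair_logit, residue_logits = model(torch.randn(2, 200, 21),
                                   torch.randn(2, 15, 21))
print(pair_logit.shape, residue_logits.shape)  # (2, 1) and (2, 15)
```

In training, such a two-head model would typically be optimized jointly, e.g. with a weighted sum of binary cross-entropy losses over the pair label and the residue labels.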

{"title":"A multi-objective comprehensive framework for predicting protein-peptide interactions and binding residues.","authors":"Ruheng Wang, Xuetong Yang, Chao Pang, Leyi Wei","doi":"10.1109/JBHI.2025.3539313","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3539313","url":null,"abstract":"<p><p>The identification of protein-peptide interacting pairs and their corresponding binding residues is fundamentally crucial and can greatly facilitate the peptide therapeutics designing and understanding the mechanisms of protein functions. Recently, several computational approaches have been proposed to solve the protein-peptide interaction prediction problem. However, most existing prediction methods cannot directly predict the protein-peptide interacting pairs as well as the binding residues from protein and peptide sequences simultaneously. Here, we developed a Comprehensive Protein-Peptide Interaction prediction Framework (CPPIF), to predict both binary protein-peptide interaction and their binding residues. We also constructed a benchmark dataset containing more than 8,900 protein-peptide interacting pairs with non-covalent interactions and their corresponding binding residues to systematically evaluate the performances of existing models. Comprehensive evaluation on the benchmark datasets demonstrated that CPPIF can successfully predict the non-covalent protein-peptide interactions that cannot be effectively captured by previous prediction methods. Moreover, CPPIF outperformed other state-of-the-art methods in predicting binding residues in the peptides and achieved good performance in the identification of important binding residues in the proteins.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Completed Feature Disentanglement Learning for Multimodal MRIs Analysis.
IF 6.7 CAS Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-06 DOI: 10.1109/JBHI.2025.3539712
Tianling Liu, Hongying Liu, Fanhua Shang, Lequan Yu, Tong Han, Liang Wan

Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, which aim to learn superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features and employ concatenation or attention mechanisms to integrate them. However, our preliminary experiments indicate that these methods can lose information shared among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately model the relationships between the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the information lost during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples features shared among subsets of the multimodal inputs, termed modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features by explicitly learning the local-global relationships among them. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods by clear margins, showcasing its superior performance.
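The fusion step can be sketched as a gating network that weights one small expert per disentangled feature group (modality-shared, modality-partial-shared, and modality-specific), so the contribution of each group is decided per input. The group count, expert form, and gating below are illustrative assumptions, not the DMF module itself.

```python
# Sketch only: dynamic mixture-of-experts fusion over disentangled groups.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMoEFusion(nn.Module):
    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_groups)])
        self.gate = nn.Linear(num_groups * dim, num_groups)

    def forward(self, groups):  # list of (batch, dim) feature groups
        # Input-dependent weights over the groups.
        weights = F.softmax(self.gate(torch.cat(groups, dim=-1)), dim=-1)
        expert_out = torch.stack(
            [torch.relu(e(g)) for e, g in zip(self.experts, groups)], dim=1)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)

# Toy usage: shared, partial-shared, and two modality-specific groups.
fusion = DynamicMoEFusion(dim=64, num_groups=4)
feats = [torch.randn(8, 64) for _ in range(4)]
print(fusion(feats).shape)  # torch.Size([8, 64])
```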

{"title":"Completed Feature Disentanglement Learning for Multimodal MRIs Analysis.","authors":"Tianling Liu, Hongying Liu, Fanhua Shang, Lequan Yu, Tong Han, Liang Wan","doi":"10.1109/JBHI.2025.3539712","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3539712","url":null,"abstract":"<p><p>Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, aiming at learning superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features, and employ concatenation or attention mechanisms to integrate these features. However, our preliminary experiments indicate that these methods could lead to a loss of shared information among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately interpret the relationships between the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the lost information during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples shared features among subsets of multimodal inputs, termed as modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features, by explicitly learning the local-global relationships among the features. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods with obvious margins, showcasing its superior performance.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0