A transcription factor (TF) is a sequence-specific DNA-binding protein that plays key roles in cell-fate decisions by regulating gene expression. Predicting TFs is important for the tea plant research community, as TFs regulate gene expression and thereby influence plant growth, development, and stress responses. Identifying them through wet-lab experimental validation is challenging because of their rarity and the high cost and time required, so computational methods have become an increasingly popular choice. The pre-training strategy has been applied to many tasks in natural language processing (NLP) and has achieved impressive performance. In this paper, we present a novel recognition algorithm named TeaTFactor that applies pre-training to the task of TF prediction. The model is built on the BERT architecture, first pre-trained on protein data from UniProt and then fine-tuned on collected TF data of tea plants. We evaluated four different word segmentation methods and compared against existing state-of-the-art prediction tools. According to comprehensive experimental results and a case study, our model outperforms existing models and achieves accurate identification. In addition, we have developed a web server at http://teatfactor.tlds.cc, which we believe will facilitate future studies on tea transcription factors and advance the field of crop synthetic biology.
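The abstract mentions evaluating four word segmentation (tokenization) schemes for protein sequences before feeding them to BERT. The paper's actual schemes are not given here; as a hedged illustration, one common family of schemes splits a sequence into k-mer "words", overlapping or not:

```python
def kmer_tokenize(sequence: str, k: int = 3, stride: int = 1) -> list[str]:
    """Split a protein sequence into k-mer 'words'; stride=1 gives
    overlapping k-mers, stride=k gives non-overlapping ones."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

# Toy sequence for illustration only.
seq = "MKTAYIAKQR"
print(kmer_tokenize(seq, k=3, stride=1))  # ['MKT', 'KTA', 'TAY', 'AYI', 'YIA', 'IAK', 'AKQ', 'KQR']
print(kmer_tokenize(seq, k=3, stride=3))  # ['MKT', 'AYI', 'AKQ']
```

The choice of k and stride trades vocabulary size against sequence length, which is why tokenization schemes are worth comparing empirically.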
"TeaTFactor: a prediction tool for tea plant transcription factors based on BERT." Qinan Tang, Ying Xiang, Wanling Gao, Liqiang Zhu, Zishu Xu, Yeyun Li, Zhenyu Yue. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3444466. Published 2024-08-16.
Pub Date: 2024-08-13 | DOI: 10.1109/TCBB.2024.3442536
Jiayi Dong, Jiahao Li, Fei Wang
Understanding the intricate regulatory relationships among genes is crucial for comprehending the development, differentiation, and cellular response in living systems. Consequently, inferring gene regulatory networks (GRNs) based on observed data has gained significant attention as a fundamental goal in biological applications. The proliferation and diversification of available data present both opportunities and challenges in accurately inferring GRNs. Deep learning, a highly successful technique in various domains, holds promise in aiding GRN inference. Several GRN inference methods employing deep learning models have been proposed; however, the selection of an appropriate method remains a challenge for life scientists. In this survey, we provide a comprehensive analysis of 12 GRN inference methods that leverage deep learning models. We trace the evolution of these major methods and categorize them based on the types of applicable data. We delve into the core concepts and specific steps of each method, offering a detailed evaluation of their effectiveness and scalability across different scenarios. These insights enable us to make informed recommendations. Moreover, we explore the challenges faced by GRN inference methods utilizing deep learning and discuss future directions, providing valuable suggestions for the advancement of data scientists in this field.
"Deep Learning in Gene Regulatory Network Inference: A Survey." Jiayi Dong, Jiahao Li, Fei Wang. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3442536.
Pub Date: 2024-08-13 | DOI: 10.1109/TCBB.2024.3442199
Xinyu He, Yujie Tang, Bo Yu, Shixin Li, Yonggong Ren
Biomedical event detection is a pivotal information extraction task in molecular biology and biomedical research, supporting medical search, disease prevention, and new drug development. Existing methods usually detect simple and complex biomedical events with the same model, and performance on complex biomedical event extraction is relatively low. In this paper, we build different neural networks for simple and complex events respectively, which improves the performance of complex event extraction. To avoid redundant information, we design a dynamic path planning strategy for argument detection. To make full use of the information shared between the trigger identification and argument detection subtasks, and to reduce cascading errors, we build a joint event extraction model. Experimental results demonstrate that our approach achieves the best F-score on the biomedical benchmark MLEE dataset and outperforms recent state-of-the-art methods.
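The abstract's core design choice is to route simple and complex events to dedicated predictors rather than one shared model. As a loose, hypothetical sketch of that routing idea (the event types and toy "models" below are made up, not the paper's networks):

```python
# Illustrative only: simple events typically have no event-valued arguments,
# so they can be handled by a lighter predictor than nested/complex events.
SIMPLE_TYPES = {"Gene_expression", "Transcription", "Phosphorylation"}

def route(event_type, simple_model, complex_model):
    """Pick the predictor appropriate to the event's complexity."""
    return simple_model if event_type in SIMPLE_TYPES else complex_model

# Toy stand-ins for the two networks: callables mapping tokens to a prediction.
simple_model = lambda tokens: {"trigger": tokens[0], "args": []}
complex_model = lambda tokens: {"trigger": tokens[0], "args": tokens[1:]}

pred = route("Regulation", simple_model, complex_model)(["activates", "TP53", "MDM2"])
print(pred)  # the complex branch keeps candidate arguments
```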
"Joint Extraction of Biomedical Events Based on Dynamic Path Planning Strategy and Hybrid Neural Network." Xinyu He, Yujie Tang, Bo Yu, Shixin Li, Yonggong Ren. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3442199.
Pub Date: 2024-08-13 | DOI: 10.1109/TCBB.2024.3442669
Sachin Mathur, Hamid Mattoo, Ziv Bar-Joseph
Time series RNASeq studies can enable understanding of the dynamics of disease progression and treatment response in patients. They also provide information on biomarkers, activated and repressed pathways, and more. While useful, data from multiple patients are challenging to integrate because of the heterogeneity in treatment response among patients and the small number of time points that are usually profiled. Owing to this heterogeneity, relying on the sampled time points to integrate data across individuals does not lead to correct reconstruction of the response patterns. To address these challenges, we developed a new constraint-based pseudotime ordering method for analyzing transcriptomics data in clinical and response studies. Our method assigns samples to their correct placement on the response curve while respecting each patient's internal sample order. We use polynomials to represent gene expression over the duration of the study and an EM algorithm to determine the parameters and locations. Application to three treatment response datasets shows that our method improves on prior methods and leads to accurate orderings that provide new biological insight into the disease and response. Code for the method is available at https://github.com/Sanofi-Public/RDCS-bulkRNASeq-pseudo ordering.
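The polynomial-plus-EM idea can be sketched in NumPy. This is an assumed simplification, not the authors' implementation: the E-step places each sample at the grid position where the fitted polynomial best matches its expression, and the M-step refits the polynomial (the patient-order constraint is omitted here for brevity):

```python
import numpy as np

# Synthetic one-gene example: expression is a quadratic in (unknown) pseudotime.
rng = np.random.default_rng(0)
true_t = np.sort(rng.uniform(0, 1, 40))
expr = 2 * true_t**2 - true_t + 0.05 * rng.normal(size=40)

grid = np.linspace(0, 1, 101)
t = rng.uniform(0, 1, 40)                      # random initial placements
for _ in range(20):
    coef = np.polyfit(t, expr, deg=2)          # M-step: refit the curve
    curve = np.polyval(coef, grid)
    # E-step: snap each sample to the grid point where the curve fits it best
    t = grid[np.argmin((expr[:, None] - curve[None, :])**2, axis=1)]

print(np.round(coef, 2))                       # fitted polynomial coefficients
```

In the actual method the E-step would additionally restrict each patient's samples to keep their observed temporal order.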
"Constrained Pseudo-time Ordering for Clinical Transcriptomics Data." Sachin Mathur, Hamid Mattoo, Ziv Bar-Joseph. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3442669.
Pub Date: 2024-08-12 | DOI: 10.1109/TCBB.2024.3440913
Yushan Qiu, Wensheng Chen, Wai-Ki Ching, Hongmin Cai, Hao Jiang, Quan Zou
Increasing evidence indicates that RNA-binding proteins (RBPs) play an essential role in mediating alternative splicing (AS) events during epithelial-mesenchymal transition (EMT). However, due to the substantial cost and complexity of biological experiments, how AS events are regulated and influenced remains largely unknown. Thus, it is important to construct effective models for inferring hidden RBP-AS event associations during the EMT process. In this paper, a novel and efficient model was developed to identify AS event-related candidate RBPs based on Adaptive Graph-based Multi-Label learning (AGML). In particular, we propose to adaptively learn a new affinity graph that captures the intrinsic structure of the data for both RBPs and AS events. Multi-view similarity matrices are employed to maintain the intrinsic structure and guide the adaptive graph learning. We then simultaneously update the RBP and AS event associations predicted from both spaces by applying multi-label learning. The experimental results show that AGML achieved AUC values of 0.9521 and 0.9873 under 5-fold and leave-one-out cross-validation, respectively, indicating the superiority and effectiveness of the proposed model. Furthermore, AGML can serve as an efficient and reliable tool for uncovering novel AS event-associated RBPs and is applicable to predicting associations between other biological entities. The source code of AGML is available at https://github.com/yushanqiu/AGML.
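One ingredient of the approach, fusing multi-view similarity matrices into a single affinity graph, can be sketched as follows. This is illustrative only (the paper's adaptive optimization is more involved); here the views are simply averaged, sparsified to each node's k nearest neighbours, and row-normalized:

```python
import numpy as np

def fuse_affinity(views, k=2):
    """Fuse symmetric similarity matrices into a row-stochastic kNN affinity graph."""
    S = np.mean(views, axis=0)                 # average the views
    np.fill_diagonal(S, 0.0)                   # no self-loops
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]        # top-k neighbours per row
    rows = np.arange(S.shape[0])[:, None]
    A[rows, idx] = S[rows, idx]                # keep only kNN similarities
    return A / A.sum(axis=1, keepdims=True)    # rows sum to 1

rng = np.random.default_rng(1)
v1, v2 = rng.random((5, 5)), rng.random((5, 5))
W = fuse_affinity([np.maximum(v1, v1.T), np.maximum(v2, v2.T)], k=2)
print(np.allclose(W.sum(axis=1), 1.0))  # True
```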
"AGML: Adaptive Graph-based Multi-label Learning for Prediction of RBP and AS Event Associations During EMT." Yushan Qiu, Wensheng Chen, Wai-Ki Ching, Hongmin Cai, Hao Jiang, Quan Zou. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3440913.
Pub Date: 2024-08-08 | DOI: 10.1109/TCBB.2024.3371808
Xiaokang Zhou, Carson K. Leung, Kevin I-Kai Wang, Giancarlo Fortino
Deep learning and big data analysis are among the most important research topics in the fields of biomedical applications and digital healthcare. With the fast development of artificial intelligence (AI) and Internet of Things (IoT) technologies, deep learning (DL) for big data analytics, including affective learning, reinforcement learning, and transfer learning, is widely applied to sense, learn, and interact with human health. Examples of biomedical applications include smart biomaterials, biomedical imaging, heartbeat/blood pressure measurement, and eye tracking. These biomedical applications collect healthcare data through remote sensors and transfer the data to a centralized system for analysis. With an enormous amount of historical data, DL and big data analysis technologies are able to identify potential linkages between features and possible risks, support important decisions in medical diagnosis, and provide valuable advice for better healthcare treatment and lifestyle. Although significant progress has been made with AI, DL, and big data analytic technologies for medical and healthcare research, gaps remain between computer-aided treatment design and real-world healthcare demands. In addition, there are unexplored areas in the fields of healthcare and biomedical applications with cutting-edge AI and DL technologies. Hence, exploring the possibilities of DL and big data analytics in the fields of biomedical applications and digital healthcare is in high demand.
"Editorial Deep Learning-Empowered Big Data Analytics in Biomedical Applications and Digital Healthcare." Xiaokang Zhou, Carson K. Leung, Kevin I-Kai Wang, Giancarlo Fortino. IEEE/ACM Transactions on Computational Biology and Bioinformatics 21(4): 516-520. DOI: 10.1109/TCBB.2024.3371808.
Due to their broad-spectrum and highly efficient antibacterial activity, antimicrobial peptides (AMPs) and their functions have been studied in the field of drug discovery. Detecting AMPs and their corresponding activities through biological experiments is costly, whereas computational approaches are far cheaper. Currently, most computational methods treat the identification of AMPs and of their activities as two independent tasks, ignoring the relationship between them. Therefore, combining and sharing patterns across the two tasks is a crucial problem that needs to be addressed. In this study, we propose a deep learning model, called DMAMP, for detecting AMPs and their activities simultaneously, which benefits from multi-task learning. The first stage utilizes convolutional neural networks and residual blocks to extract hidden features shared between the two related tasks. The next stage uses two fully connected layers to learn the task-specific information. Meanwhile, the original evolutionary features from the peptide sequence are also fed to the predictor of the second task to recover forgotten information. Experiments on the independent test dataset demonstrate that our method outperforms the single-task model by 4.28% in Matthews Correlation Coefficient (MCC) on the first task, and achieves an average MCC of 0.2627 across five activities on the second task, higher than the single-task model and two existing methods. To understand whether the features derived from the convolutional layers capture the differences between target classes, we visualize these high-dimensional features by projecting them into 3D space. In addition, we show that our predictor can identify peptides active against Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). We hope that our proposed method can give new insights into the discovery of novel antiviral peptide drugs.
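The evaluation metric used here, the Matthews Correlation Coefficient, is a standard quantity computable directly from confusion-matrix counts; the counts below are made-up examples:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts;
    defined as 0 when any marginal is empty."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=50, fp=0, tn=50, fn=0))           # 1.0 (perfect predictions)
print(round(mcc(tp=45, fp=5, tn=40, fn=10), 4))
```

Unlike accuracy, MCC stays informative on imbalanced data, which is why it is a common choice for peptide classification benchmarks.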
"DMAMP: A deep-learning model for detecting antimicrobial peptides and their multi-activities." Qiaozhen Meng, Genlang Chen, Shixin Zheng, Yulai Lin, Bin Liu, Jijun Tang, Fei Guo. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3439541. Published 2024-08-06.
Pub Date: 2024-07-31 | DOI: 10.1109/TCBB.2024.3425644
Xiuhao Fu, Hao Duan, Xiaofeng Zang, Chunling Liu, Xingfeng Li, Qingchen Zhang, Zilong Zhang, Quan Zou, Feifei Cui
Tuberculosis has plagued mankind since ancient times, and the struggle between humans and tuberculosis continues. Mycobacterium tuberculosis is the leading cause of tuberculosis, infecting nearly one-third of the world's population. The rise of peptide drugs has created a new direction in the treatment of tuberculosis; therefore, predicting anti-tuberculosis peptides is crucial. This paper proposes an anti-tuberculosis peptide prediction method based on hybrid features and stacked ensemble learning. First, a random forest (RF) and extremely randomized trees (ERT) are selected as the first-level learners of the stacked ensemble. Then, the five best-performing feature encoding methods are selected to obtain a hybrid feature vector, which is refined using decision tree-based recursive feature elimination (DT-RFE). After selection, the optimal feature subset is used as the input of the stacked ensemble model. Meanwhile, logistic regression (LR) serves as the second-level learner to build the final stacked ensemble model, Hyb_SEnc. The prediction accuracy of Hyb_SEnc reached 94.68% and 95.74% on the independent test sets of AntiTb_MD and AntiTb_RD, respectively. In addition, we provide a user-friendly web server (http://www.bioailab.com/Hyb_SEnc). The source code is freely available at https://github.com/fxh1001/Hyb_SEnc.
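The stacking design described above (RF and ERT as first-level learners, logistic regression as the meta-learner) can be sketched with scikit-learn. This is an illustrative minimal version on synthetic data, not the paper's pipeline; the feature encodings and DT-RFE step are omitted:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for encoded peptide features.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("ert", ExtraTreesClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(round(acc, 2))
```

The `cv` argument matters: the logistic-regression meta-learner is trained on out-of-fold predictions, which avoids leaking the base learners' training fit into the second level.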
"Hyb_SEnc: An Antituberculosis Peptide Predictor Based on a Hybrid Feature Vector and Stacked Ensemble Learning." Xiuhao Fu, Hao Duan, Xiaofeng Zang, Chunling Liu, Xingfeng Li, Qingchen Zhang, Zilong Zhang, Quan Zou, Feifei Cui. IEEE/ACM Transactions on Computational Biology and Bioinformatics. DOI: 10.1109/TCBB.2024.3425644.
Pub Date: 2024-07-29 | DOI: 10.1109/TCBB.2024.3434992
Xiaoli Lin, Zhuang Yin, Xiaolong Zhang, Jing Hu
Accurate prediction of drug-drug interactions (DDIs) plays an important role in improving the efficiency of drug development and ensuring the safety of combination therapy. Most existing models rely on a single source of information to predict DDIs, and few models can perform tasks on biomedical knowledge graphs. This paper proposes a new hybrid method, namely Knowledge Graph Representation Learning and Feature Fusion (KGRLFF), to fully exploit the information from the biomedical knowledge graph and molecular structure of drugs to better predict DDIs. KGRLFF first uses a Bidirectional Random Walk sampling method based on the PageRank algorithm (BRWP) to obtain higher-order neighborhood information of drugs in the knowledge graph, including neighboring nodes, semantic relations, and higher-order information associated with triple facts. Then, an embedded representation learning model named Knowledge Graph-based Cyclic Recursive Aggregation (KGCRA) is used to learn the embedded representations of drugs by recursively propagating and aggregating messages with drugs as both the source and destination. In addition, the model learns the molecular structures of the drugs to obtain the structured features. Finally, a Feature Representation Fusion Strategy (FRFS) was developed to integrate embedded representations and structured feature representations. Experimental results showed that KGRLFF is feasible for predicting potential DDIs.
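The idea behind BRWP, biasing random-walk sampling over a knowledge graph by PageRank scores, can be sketched on a toy graph. This is an assumed simplification of the paper's bidirectional sampling; the tiny graph and node names are made up:

```python
import random

# Toy undirected knowledge graph as an adjacency dict.
graph = {"drugA": ["targets", "drugB"],
         "targets": ["drugA", "pathway"],
         "drugB": ["drugA", "pathway"],
         "pathway": ["targets", "drugB"]}

def pagerank(g, d=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict."""
    pr = {n: 1.0 / len(g) for n in g}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(g) for n in g}
        for n, nbrs in g.items():
            for m in nbrs:
                nxt[m] += d * pr[n] / len(nbrs)
        pr = nxt
    return pr

def biased_walk(g, start, length, rng):
    """Random walk whose next hop is weighted by neighbour PageRank."""
    pr, node, path = pagerank(g), start, [start]
    for _ in range(length):
        nbrs = g[node]
        node = rng.choices(nbrs, weights=[pr[m] for m in nbrs], k=1)[0]
        path.append(node)
    return path

rng = random.Random(0)
path = biased_walk(graph, "drugA", 4, rng)
print(path)  # walk of 5 nodes starting at drugA
```

Weighting hops by PageRank steers sampling toward structurally important nodes, so the collected neighbourhoods emphasize high-order context rather than uniform noise.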
KGRLFF: Detecting Drug-Drug Interactions Based on Knowledge Graph Representation Learning and Feature Fusion. Xiaoli Lin, Zhuang Yin, Xiaolong Zhang, Jing Hu. IEEE/ACM Transactions on Computational Biology and Bioinformatics, published online 2024-07-29. DOI: 10.1109/TCBB.2024.3434992.
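The BRWP step described in the abstract samples knowledge-graph neighborhoods with a PageRank bias. The abstract does not give the authors' implementation (the bidirectional walk and triple-fact handling are omitted here), but the core idea of a random walk whose next-hop probabilities are weighted by PageRank scores can be sketched on a toy graph; the names `pagerank` and `biased_walk` are hypothetical, not from the paper.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-8):
    """Power-iteration PageRank over a row-normalized adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    trans = np.divide(adj, deg, out=np.full_like(adj, 1.0 / n), where=deg > 0)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (trans.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

def biased_walk(adj, scores, start, length, rng):
    """Random walk whose next hop is chosen with probability proportional
    to the PageRank score of each neighbor, so important entities are
    visited more often than in a uniform walk."""
    walk = [start]
    for _ in range(length):
        nbrs = np.nonzero(adj[walk[-1]])[0]
        if len(nbrs) == 0:
            break  # dead end
        p = scores[nbrs] / scores[nbrs].sum()
        walk.append(rng.choice(nbrs, p=p))
    return walk

# Toy knowledge graph: 5 entities, undirected edges.
adj = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    adj[u, v] = adj[v, u] = 1.0

pr = pagerank(adj)
rng = np.random.default_rng(0)
walk = biased_walk(adj, pr, start=0, length=4, rng=rng)
```

In a full pipeline, walks like this would be run from each drug node (and, per the abstract, in both directions) to collect the higher-order neighborhood fed into the aggregation model.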
Predicting biomolecular interactions is significant for understanding biological systems. Most existing methods for link prediction are based on graph convolution. Although graph convolution methods are advantageous in extracting structural information from biomolecular interactions, two key challenges remain: how to consider both immediate and high-order neighbors, and how to reduce noise when aggregating high-order neighbors. To address these challenges, we propose a novel method, called mixed high-order graph convolution with filter network via LSTM and channel attention (HGLA), to predict biomolecular interactions. Firstly, basic and high-order features are extracted through a traditional graph convolutional network (GCN) and a two-layer MixHop (Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing), respectively. Secondly, these features are mixed and fed into a filter network composed of LayerNorm, SENet and LSTM to generate filtered features, which are concatenated and used for link prediction. The advantages of HGLA are: 1) HGLA processes high-order features separately, rather than simply concatenating them; 2) HGLA better balances the basic and high-order features; 3) HGLA effectively filters the noise from high-order neighbors. It outperforms state-of-the-art networks on four benchmark datasets.
HGLA: Biomolecular Interaction Prediction based on Mixed High-Order Graph Convolution with Filter Network via LSTM and Channel Attention. Zhen Zhang, Zhaohong Deng, Ruibo Li, Wei Zhang, Qiongdan Lou, Kup-Sze Choi, Shitong Wang. IEEE/ACM Transactions on Computational Biology and Bioinformatics, published online 2024-07-26. DOI: 10.1109/TCBB.2024.3434399. The codes are available at https://github.com/zznb123/HGLA.
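The MixHop component named in the HGLA abstract mixes neighborhood orders by computing features from several powers of the normalized adjacency matrix and concatenating them, instead of collapsing everything into one hop. A minimal NumPy sketch of that idea follows; this is an illustrative approximation, not the HGLA code — `mixhop_layer` and its `powers` parameter are hypothetical names, and the LayerNorm/SENet/LSTM filter network is omitted.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def mixhop_layer(adj_norm, x, weights, powers=(0, 1, 2)):
    """MixHop-style propagation: for each power p in `powers`, compute
    relu(A^p X W_p), then concatenate along the feature axis so the
    layer keeps immediate and high-order neighborhoods separate."""
    outs, h, prev = [], x, 0
    for p, w in zip(powers, weights):
        for _ in range(p - prev):  # raise A^prev X to A^p X incrementally
            h = adj_norm @ h
        prev = p
        outs.append(np.maximum(h @ w, 0.0))  # ReLU
    return np.concatenate(outs, axis=1)

# Toy interaction graph: 4 nodes in a ring, 6-dim input features.
rng = np.random.default_rng(1)
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    adj[u, v] = adj[v, u] = 1.0
x = rng.normal(size=(4, 6))
weights = [rng.normal(size=(6, 5)) for _ in range(3)]  # one W_p per power

out = mixhop_layer(normalize_adj(adj), x, weights)  # shape (4, 3 * 5)
```

Keeping each power's output as a separate slice of the concatenation is what lets a downstream module (in HGLA, the filter network) weigh or suppress individual neighborhood orders rather than receiving them pre-blended.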