Pub Date : 2025-02-12DOI: 10.1016/j.jbi.2025.104786
Çerağ Oğuztüzün , Zhenxiang Gao , Hui Li , Rong Xu
Objective:
Drug repurposing accelerates therapeutic development by finding new indications for approved drugs. However, accounting for individual patient differences is challenging. This study introduces a Precision Drug Repurposing (PDR) framework at single-patient resolution, integrating individual-level data with a foundational biomedical knowledge graph to enable personalized drug discovery.
Methods:
We developed a framework integrating patient-specific data from the UK Biobank (Polygenic Risk Scores, biomarker expressions, and medical history) with a comprehensive biomedical knowledge graph (61,146 entities, 1,246,726 relations). Using Alzheimer’s Disease as a case study, we compared three diverse patient-specific models with a foundational model through standard link prediction metrics. We evaluated top predicted candidate drugs using patient medication history and literature review.
Results:
Our framework maintained the robust prediction capabilities of the foundational model. The integration of patient data, particularly Polygenic Risk Scores (PRS), significantly influenced drug prioritization (Cohen’s d = 1.05 for scoring differences). Ablation studies demonstrated the crucial role of PRS, with the effect size decreasing to 0.77 upon its removal. Each patient model identified novel drug candidates that the foundational model missed but that showed therapeutic relevance when evaluated against the patient’s own medication history. These candidates were further supported by literature evidence aligned with each patient’s PRS-based genetic risk profile.
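The reported effect sizes (Cohen’s d = 1.05, dropping to 0.77 when PRS is ablated) quantify standardized differences between drug-score distributions. As a reference, a minimal sketch of Cohen’s d with a pooled standard deviation (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

A d near 1 means the two score distributions differ by roughly one pooled standard deviation, which is conventionally considered a large effect.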
Conclusion:
This exploratory study demonstrates a promising approach to precision drug repurposing by integrating patient-specific data with a foundational knowledge graph.
Title: "Precision Drug Repurposing (PDR): Patient-level modeling and prediction combining foundational knowledge graph with biobank data" (Journal of Biomedical Informatics, Volume 163, Article 104786)
Pub Date : 2025-02-07DOI: 10.1016/j.jbi.2025.104789
Yiming Li , Deepthi Viswaroopan , William He , Jianfu Li , Xu Zuo , Hua Xu , Cui Tao
Objective
Extracting adverse events (AEs) following COVID-19 vaccination from text data is crucial for monitoring and analyzing the safety profiles of immunizations, identifying potential risks, and ensuring the safe use of these products. Traditional deep learning models are adept at learning intricate feature representations and dependencies in sequential data, but often require extensive labeled data. In contrast, large language models (LLMs) excel at understanding contextual information but exhibit unstable performance on named entity recognition (NER) tasks, possibly due to their broad but unspecific training. This study aims to evaluate the effectiveness of LLMs and traditional deep learning models in AE extraction, and to assess the impact of ensembling these models on performance.
Methods
In this study, we utilized reports and posts from the Vaccine Adverse Event Reporting System (VAERS) (n = 230), Twitter (n = 3,383), and Reddit (n = 49) as our corpora. Our goal was to extract three types of entities: vaccine, shot, and adverse event (ae). We explored multiple LLMs, including GPT-2, GPT-3.5, GPT-4, Llama-2 7b, and Llama-2 13b, fine-tuning all except GPT-4, as well as traditional deep learning models such as recurrent neural networks (RNNs) and Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT). To enhance performance, we created ensembles of the three best-performing models. For evaluation, we used strict and relaxed F1 scores for each entity type and micro-averaged F1 for overall performance.
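Strict versus relaxed matching can be made concrete at the span level. A minimal sketch, under our own assumptions rather than the paper’s exact evaluation code: spans are (start, end, type) tuples, strict matching requires exact boundaries and type, and relaxed matching accepts any character overlap with the same type.

```python
def span_f1(gold, pred, relaxed=False):
    """Entity-level F1 over spans given as (start, end, type) tuples."""
    def match(p, g):
        if p[2] != g[2]:               # entity types must agree
            return False
        if relaxed:                    # any overlap counts
            return p[0] < g[1] and g[0] < p[1]
        return p[0] == g[0] and p[1] == g[1]   # exact boundaries
    tp_pred = sum(any(match(p, g) for g in gold) for p in pred)
    tp_gold = sum(any(match(p, g) for p in pred) for g in gold)
    prec = tp_pred / len(pred) if pred else 0.0
    rec = tp_gold / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

With a one-token boundary error, strict F1 drops to 0.5 while relaxed F1 stays at 1.0, which is why the two metrics are reported side by side.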
Results
The ensemble demonstrated the best performance in identifying the entities “vaccine,” “shot,” and “ae,” achieving strict F1-scores of 0.878, 0.930, and 0.925, respectively, and a micro-average score of 0.903. These results underscore the significance of fine-tuning models for specific tasks and demonstrate the effectiveness of ensemble methods in enhancing performance.
Conclusion
This study demonstrates the effectiveness and robustness of ensembling fine-tuned traditional deep learning models and LLMs for extracting AE-related information following COVID-19 vaccination. It contributes to the advancement of natural language processing in the biomedical domain, providing valuable insights into improving AE extraction from text data for pharmacovigilance and public health surveillance.
Title: "Improving entity recognition using ensembles of deep learning and fine-tuned large language models: A case study on adverse event extraction from VAERS and social media" (Journal of Biomedical Informatics, Volume 163, Article 104789)
Pub Date : 2025-02-06DOI: 10.1016/j.jbi.2025.104785
Haoqin Yang , Yuandong Liu , Longbo Zhang , Hongzhen Cai , Kai Che , Linlin Xing
Medication recommendations are designed to provide physicians and patients with personalized, accurate, and safe medication choices that maximize patient outcomes. Although significant progress has been made in related research, three major challenges remain: inadequate modeling of patients’ multidimensional and time-series information, insufficient representation of medication substructures, and a poor balance between model accuracy and drug-drug interactions. To address these issues, this paper proposes SDRBT, a safe medication recommendation model based on patient deep spatio-temporal encoding and medication substructure mapping. SDRBT includes a patient deep spatio-temporal encoding module that combines symptom, disease diagnosis, and treatment information from the patient’s electronic health record data. It uses the Block-Recurrent Transformer to model longitudinal temporal information across patient dimensions, yielding a horizontal representation of the patient’s current visit. A dual-domain mapping module performs global and local mapping of medications, fully learning and aggregating medication substructure representations. Finally, a PID loss control unit incorporates a drug-interaction control module based on similarity between the electronic health map and the drug-interaction graph. This module ensures the safety of the recommended medication combinations while improving recommendation efficiency and reducing model training time. Experiments on the public MIMIC-III dataset demonstrate SDRBT’s superior accuracy in medication recommendation.
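The “PID loss control unit” suggests a proportional-integral-derivative controller that adapts the weight on a drug-interaction penalty during training, a technique used elsewhere in safe medication recommendation. A hypothetical sketch only; the class name, gains, and target rate are illustrative assumptions, not the paper’s implementation:

```python
class PIDWeight:
    """Hypothetical PID unit: raise the DDI-penalty weight when the
    current DDI rate of recommended combinations exceeds a target."""
    def __init__(self, target, kp=0.5, ki=0.1, kd=0.1):
        self.target, self.kp, self.ki, self.kd = target, kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, ddi_rate):
        err = ddi_rate - self.target          # positive: too many DDIs
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        w = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.0, w))          # clamp loss weight to [0, 1]
```

The design choice is that the penalty weight grows only while the safety constraint is violated, so accuracy is not sacrificed once the recommended combinations are under the target interaction rate.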
Title: "Patient deep spatio-temporal encoding and medication substructure mapping for safe medication recommendation" (Journal of Biomedical Informatics, Volume 163, Article 104785)
Pub Date : 2025-02-03DOI: 10.1016/j.jbi.2025.104784
Shuai Liu , Xiao Yan , Xiao Guo , Shun Qi , Huaning Wang , Xiangyu Chang
Objective:
Identifying functional connectivity biomarkers in patients with major depressive disorder (MDD) is essential for advancing the understanding of disorder mechanisms and enabling early intervention. Multi-site data arise naturally and could enhance the statistical power of single-site methods. However, the main concerns are inter-site heterogeneity and data-sharing barriers between sites. Our objective is to overcome these barriers and learn multiple Bayesian networks (BNs) from rs-fMRI data.
Methods:
We propose a federated joint estimator and a corresponding optimization algorithm, NOTEARS-PFL. Specifically, we incorporate both shared and site-specific information into NOTEARS-PFL using a sparse group lasso penalty. To address the data-sharing constraint, we develop an alternating direction method of multipliers (ADMM) algorithm for optimizing NOTEARS-PFL. This entails processing neuroimaging data locally at each site, followed by transmission of the learned network structures for central global updates.
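The sparse group lasso penalty is what separates shared from site-specific structure: an l1 term sparsifies individual per-site edges, while a group l2 term, taken across sites per edge, can zero an edge in all sites simultaneously and so encourages a shared support. A minimal sketch over per-site adjacency matrices (shapes and weights are illustrative, not the paper’s parameterization):

```python
import numpy as np

def sparse_group_lasso(W_sites, lam1=0.1, lam2=0.1):
    """Penalty over K per-site adjacency matrices, shape (K, d, d).
    l1 drives individual (site-specific) edges to zero; the group l2
    term, pooled across sites per edge, encourages a shared support."""
    W = np.asarray(W_sites, float)
    l1 = np.abs(W).sum()
    group = np.sqrt((W ** 2).sum(axis=0)).sum()   # l2 over sites, per edge
    return lam1 * l1 + lam2 * group
```

In the federated setting, only summaries like these learned adjacency structures leave a site; the raw neuroimaging data never do.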
Results:
The effectiveness and accuracy of NOTEARS-PFL are validated through its application to both synthetic and real-world multi-site resting-state functional magnetic resonance imaging (rs-fMRI) datasets, demonstrating superior efficiency and precision compared with alternative approaches.
Conclusion:
We proposed a toolbox, NOTEARS-PFL, for learning heterogeneous brain functional connectivity in MDD patients from multi-site data efficiently and under the data-sharing constraint. Comprehensive experiments on both synthetic data and real-world multi-site rs-fMRI datasets with MDD highlight the efficacy of the proposed method.
Title: "Federated Bayesian network learning from multi-site data" (Journal of Biomedical Informatics, Volume 163, Article 104784)
Pub Date : 2025-02-02DOI: 10.1016/j.jbi.2025.104787
Naimin Jing , Yiwen Lu , Jiayi Tong , James Weaver , Patrick Ryan , Hua Xu , Yong Chen
Objectives
Binary outcomes in electronic health records (EHRs) derived using automated phenotyping algorithms may suffer from phenotyping error, resulting in bias in association estimation. Huang et al. [1] proposed the Prior Knowledge-Guided Integrated Likelihood Estimation (PIE) method to mitigate this bias; however, their investigation focused on point estimation without statistical inference, and their simulation-based evaluation of PIE was a proof of concept covering only a limited range of scenarios. This study aims to comprehensively assess PIE’s performance, including (1) how well PIE performs under a wide spectrum of phenotyping-algorithm operating characteristics in real-world scenarios (e.g., low prevalence, low sensitivity, high specificity); (2) beyond point estimation, how much variation the prior distribution introduces into the PIE estimator; and (3) from a hypothesis-testing point of view, whether PIE improves type I error and statistical power relative to the naïve method (i.e., ignoring the phenotyping error).
Methods
Synthetic data and use-case analysis were utilized to evaluate PIE. The synthetic data were generated under diverse outcome prevalence, phenotyping algorithm sensitivity, and association effect sizes. Simulation studies compared PIE under different prior distributions with the naïve method, assessing bias, variance, type I error, and power. Use-case analysis compared the performance of PIE and the naïve method in estimating the association of multiple predictors with COVID-19 infection.
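The bias the naïve method incurs is easy to reproduce in miniature: generate a true binary outcome, corrupt it with imperfect sensitivity and specificity, and compare a crude association measure on true versus observed labels. This sketch illustrates only the attenuation phenomenon; it is not the PIE estimator, and all parameter values are illustrative:

```python
import numpy as np

def phenotyping_attenuation(beta=1.0, n=50000, sens=0.7, spec=0.95, seed=0):
    """Return (r_true, r_obs): point-biserial correlation of exposure x
    with the true outcome vs. the error-prone observed phenotype."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(-2.0 + beta * x)))      # low-prevalence outcome
    y = rng.binomial(1, p)
    miss = rng.random(n) < (1 - sens)             # missed true cases
    false_pos = rng.random(n) < (1 - spec)        # spurious cases
    y_obs = np.where(y == 1, 1 - miss.astype(int), false_pos.astype(int))
    return np.corrcoef(x, y)[0, 1], np.corrcoef(x, y_obs)[0, 1]
```

The observed-label correlation is attenuated toward zero relative to the true-label correlation, which is the bias PIE corrects by integrating prior knowledge of sensitivity and specificity into the likelihood.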
Results
PIE exhibited reduced bias compared with the naïve method across varied simulation settings, with comparable type I error and power. As the effect size grew, the bias reduction achieved by PIE increased. PIE performed best when the prior distribution aligned closely with the true phenotyping-algorithm characteristics. The impact of prior quality was minor for low-prevalence outcomes but large for common outcomes. In the use-case analysis, PIE maintained relatively accurate estimates across scenarios, particularly outperforming the naïve approach under large effect sizes.
Conclusion
PIE effectively mitigates estimation bias in a wide spectrum of real-world settings, particularly with accurate prior information. Its main benefit lies in bias reduction rather than hypothesis testing. The impact of the prior is small for low-prevalence outcomes.
Title: "Evaluating the Bias, type I error and statistical power of the prior Knowledge-Guided integrated likelihood estimation (PIE) for bias reduction in EHR based association studies" (Journal of Biomedical Informatics, Volume 163, Article 104787)
Pub Date : 2025-02-01DOI: 10.1016/j.jbi.2025.104772
Shiwei Gao, Jingjing Xie, Yizhao Zhao
Background
In the medical context where polypharmacy is increasingly common, accurately predicting drug-drug interactions (DDIs) is necessary for enhancing clinical medication safety and personalized treatment. Despite progress in identifying potential DDIs, a deep understanding of the underlying mechanisms of DDIs remains limited, constraining the rapid development and clinical application of new drugs.
Methods
This study introduces a novel multimodal drug-drug interaction (MMDDI) model based on multi-source drug data and comprehensive feature fusion techniques, aiming to improve the accuracy and depth of DDI prediction. We utilized the real-world DrugBank dataset, which contains rich drug information. Our task was to predict multiple interaction events between drug pairs and analyze the underlying mechanisms of these interactions. The MMDDI model achieves precise predictions through four key stages: feature extraction, drug pairing strategy, fusion network, and multi-source feature integration. We employed advanced data fusion techniques and machine learning algorithms for multidimensional analysis of drug features and interaction events.
Results
The MMDDI model was comprehensively evaluated on three representative prediction tasks. Experimental results demonstrated that it outperforms existing methods in predictive accuracy, generalization ability, and interpretability. Specifically, the MMDDI model achieved an accuracy of 93% on the test set, and the area under the ROC curve (AUC) reached 0.9505, showing excellent predictive performance. Furthermore, the model’s interpretability analysis revealed complex relationships between drug features and interaction mechanisms, providing new insights for clinical medication decisions.
Conclusion
The MMDDI model not only improves the accuracy of DDI prediction but also provides significant scientific support for clinical medication safety and drug development by deeply analyzing the mechanisms of drug interactions. These findings have the potential to improve patient medication outcomes and contribute to the development of personalized medicine.
Title: "A Multi-Source drug combination and Omnidirectional feature fusion approach for predicting Drug-Drug interaction events" (Journal of Biomedical Informatics, Volume 162, Article 104772)
Pub Date : 2025-02-01DOI: 10.1016/j.jbi.2025.104774
Leif Huender , Mary Everett , John Shovic
Coccidioidomycosis (cocci), more commonly known as Valley Fever, is a fungal infection caused by Coccidioides species that poses a significant public health challenge, particularly in the semi-arid regions of the Americas, with notable prevalence in California and Arizona. Previous epidemiological studies have established a correlation between cocci incidence and regional weather patterns, indicating that climatic factors influence the fungus’s life cycle and subsequent disease transmission. This study hypothesizes that Long Short-Term Memory (LSTM) and extended Long Short-Term Memory (xLSTM) models, known for their ability to capture long-term dependencies in time-series data, can outperform traditional statistical methods in predicting cocci outbreak cases. Our research analyzed daily meteorological features from 2001 to 2022 across 48 counties in California, covering diverse microclimates and cocci incidence. The study evaluated 846 LSTM models and 176 xLSTM models across various fine-tuning configurations. To ensure the reliability of our results, these advanced neural network architectures were cross-analyzed with baseline regression and Multi-Layer Perceptron (MLP) models, providing a comprehensive comparative framework. We found that LSTM-type architectures outperform traditional methods, with xLSTM achieving the lowest test RMSE of 282.98 (95% CI: 259.2-306.8) compared to the baseline’s 468.51 (95% CI: 458.2-478.8), a reduction of 39.60% in prediction error. While both LSTM (283.50, 95% CI: 259.7-307.3) and MLP (293.14, 95% CI: 268.3-318.0) also showed substantial improvements over the baseline, the overlapping confidence intervals suggest similar predictive capabilities among the advanced models. This improvement in predictive capability suggests a strong correlation between temporal microclimatic variations and regional cocci incidence.
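The 39.60% figure follows directly from the reported RMSEs; a one-line check (the helper name is ours):

```python
def pct_reduction(baseline, model):
    """Relative RMSE reduction of a model vs. a baseline, in percent."""
    return 100.0 * (baseline - model) / baseline

# Reported test RMSEs: baseline 468.51, xLSTM 282.98 -> about 39.60 %
```

The same helper applied to the LSTM (283.50) and MLP (293.14) results gives reductions of roughly 39.5% and 37.4%, consistent with the overlapping confidence intervals noted above.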
The increased predictive power of these models has significant public health implications, potentially informing strategies for cocci outbreak prevention and control. Moreover, this study represents the first application of the novel xLSTM architecture in epidemiological research and pioneers the evaluation of modern machine learning methods’ accuracy in predicting cocci outbreaks. These findings contribute to the ongoing efforts to address cocci, offering a new approach to understanding and potentially mitigating the impact of the disease in affected regions.
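The reported 39.60% error reduction follows directly from the two RMSE values quoted in the abstract; a quick check, using only the numbers stated in the text (not recomputed from data):

```python
# Verify the reported prediction-error reduction from the abstract's RMSEs.
baseline_rmse = 468.51  # Baseline Regression test RMSE
xlstm_rmse = 282.98     # best xLSTM test RMSE

reduction = (baseline_rmse - xlstm_rmse) / baseline_rmse * 100
print(f"{reduction:.2f}%")  # → 39.60%
```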
{"title":"Valley-Forecast: Forecasting Coccidioidomycosis incidence via enhanced LSTM models trained on comprehensive meteorological data","authors":"Leif Huender , Mary Everett , John Shovic","doi":"10.1016/j.jbi.2025.104774","DOIUrl":"10.1016/j.jbi.2025.104774","url":null,"abstract":"<div><div>Coccidioidomycosis (cocci), or more commonly known as Valley Fever, is a fungal infection caused by Coccidioides species that poses a significant public health challenge, particularly in the semi-arid regions of the Americas, with notable prevalence in California and Arizona. Previous epidemiological studies have established a correlation between cocci incidence and regional weather patterns, indicating that climatic factors influence the fungus’s life cycle and subsequent disease transmission. This study hypothesizes that Long Short-Term Memory (LSTM) and extended Long Short-Term Memory (xLSTM) models, known for their ability to capture long-term dependencies in time-series data, can outperform traditional statistical methods in predicting cocci outbreak cases. Our research analyzed daily meteorological features from 2001 to 2022 across 48 counties in California, covering diverse microclimates and cocci incidence. The study evaluated 846 LSTM models and 176 xLSTM models with various fine-tuning metrics. To ensure the reliability of our results, these advanced neural network architectures are cross analyzed with Baseline Regression and Multi-Layer Perceptron (MLP) models, providing a comprehensive comparative framework. We found that LSTM-type architectures outperform traditional methods, with xLSTM achieving the lowest test RMSE of 282.98 (95% CI: 259.2-306.8) compared to the baseline’s 468.51 (95% CI: 458.2-478.8), demonstrating a reduction of 39.60% in prediction error. 
While both LSTM (283.50, 95% CI: 259.7-307.3) and MLP (293.14, 95% CI: 268.3-318.0) also showed substantial improvements over the baseline, the overlapping confidence intervals suggest similar predictive capabilities among the advanced models. This improvement in predictive capability suggests a strong correlation between temporal microclimatic variations and regional cocci incidences. The increased predictive power of these models has significant public health implications, potentially informing strategies for cocci outbreak prevention and control. Moreover, this study represents the first application of the novel xLSTM architecture in epidemiological research and pioneers the evaluation of modern machine learning methods’ accuracy in predicting cocci outbreaks. These findings contribute to the ongoing efforts to address cocci, offering a new approach to understanding and potentially mitigating the impact of the disease in affected regions.</div></div>","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"162 ","pages":"Article 104774"},"PeriodicalIF":4.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-01DOI: 10.1016/j.jbi.2025.104778
Shixu Lin , Lucas Garay , Yining Hua , Zhijiang Guo , Wanxin Li , Minghui Li , Yujie Zhang , Xiaolin Xu , Jie Yang
Objective
Current studies leveraging social media data for disease monitoring face challenges like noisy colloquial language and insufficient tracking of user disease progression in longitudinal data settings. This study aims to develop a pipeline for collecting, cleaning, and analyzing large-scale longitudinal social media data for disease monitoring, with a focus on the COVID-19 pandemic.
Materials and methods
This pipeline begins by screening COVID-19 cases from tweets spanning February 1, 2020, to April 30, 2022. Longitudinal data is collected for each patient, two months before and three months after self-reporting. Symptoms are extracted using Named Entity Recognition (NER), followed by denoising with a combination of Graph Convolutional Network (GCN) and Bidirectional Encoder Representations from Transformers (BERT) model to retain only User-experienced Symptom Mentions (USM). Subsequently, symptoms are mapped to standardized medical concepts using the Unified Medical Language System (UMLS). Finally, this study conducts symptom pattern analysis and visualization to illustrate temporal changes in symptom prevalence and co-occurrence.
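The stages above can be made concrete with a hypothetical skeleton. This is not the authors' implementation — the real system uses an NER model for extraction and a GCN+BERT classifier for denoising; here each stage is a deliberately simple stub (keyword lexicon, first-person heuristic, dictionary lookup) so the data flow is visible. The lexicon, heuristic phrases, and UMLS concept identifiers are illustrative:

```python
# Hypothetical sketch of the pipeline: extract symptoms, keep only
# user-experienced mentions (USM), then normalize to UMLS concepts.
SYMPTOM_LEXICON = {"cough", "fever", "fatigue"}
UMLS_MAP = {"cough": "C0010200", "fever": "C0015967", "fatigue": "C0015672"}

def extract_symptoms(tweet: str) -> list[str]:
    """Stand-in for the NER model: keyword match against a symptom lexicon."""
    return [w.strip(".,") for w in tweet.lower().split()
            if w.strip(".,") in SYMPTOM_LEXICON]

def is_user_experienced(tweet: str) -> bool:
    """Stand-in for the GCN+BERT USM classifier: keep first-person reports."""
    return any(p in tweet.lower() for p in ("i have", "i've", "my"))

def normalize(symptoms: list[str]) -> list[str]:
    """Map surface forms to UMLS concept identifiers."""
    return [UMLS_MAP[s] for s in symptoms if s in UMLS_MAP]

tweet = "I have a bad cough and fever today"
if is_user_experienced(tweet):
    print(normalize(extract_symptoms(tweet)))  # ['C0010200', 'C0015967']
```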
Results
This study identified 191,096 self-reported COVID-19-positive cases from COVID-19-related tweets and retrospectively collected 811,398,280 historical tweets, of which 2,120,964 contained symptom information. After denoising, 39% (832,287) of symptom-sharing tweets reflected user-experienced mentions. The trained USM model achieved an average F1 score of 0.927. Further analysis revealed a higher prevalence of upper respiratory tract symptoms during the Omicron period compared to the Delta and Wild-type periods. Additionally, there was a pronounced co-occurrence of lower respiratory tract and nervous system symptoms in the Wild-type strain and Delta variant.
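The 39% retention figure is consistent with the counts quoted in the abstract; a quick check using only those stated numbers:

```python
# Verify the reported USM retention fraction from the abstract's counts.
usm_tweets = 832_287        # user-experienced symptom mentions after denoising
symptom_tweets = 2_120_964  # tweets containing symptom information

frac = usm_tweets / symptom_tweets * 100
print(f"{frac:.0f}%")  # → 39%
```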
Conclusion
This study established a robust framework for analyzing longitudinal social media data to monitor symptoms during a pandemic. By integrating denoising of user-experienced symptom mentions, our findings reveal the duration of different symptoms over time and by variant within a cohort of nearly 200,000 patients, providing critical insights into symptom trends that are often difficult to capture through traditional data sources.
{"title":"Analysis of longitudinal social media for monitoring symptoms during a pandemic","authors":"Shixu Lin , Lucas Garay , Yining Hua , Zhijiang Guo , Wanxin Li , Minghui Li , Yujie Zhang , Xiaolin Xu , Jie Yang","doi":"10.1016/j.jbi.2025.104778","DOIUrl":"10.1016/j.jbi.2025.104778","url":null,"abstract":"<div><h3>Objective</h3><div>Current studies leveraging social media data for disease monitoring face challenges like noisy colloquial language and insufficient tracking of user disease progression in longitudinal data settings. This study aims to develop a pipeline for collecting, cleaning, and analyzing large-scale longitudinal social media data for disease monitoring, with a focus on COVID-19 pandemic.</div></div><div><h3>Materials and methods</h3><div>This pipeline initiates by screening COVID-19 cases from tweets spanning February 1, 2020, to April 30, 2022. Longitudinal data is collected for each patient, two months before and three months after self-reporting. Symptoms are extracted using Name Entity Recognition (NER), followed by denoising with a combination of Graph Convolutional Network (GCN) and Bidirectional Encoder Representations from Transformers (BERT) model to retain only User-experienced Symptom Mentions (USM). Subsequently, symptoms are mapped to standardized medical concepts using the Unified Medical Language System (UMLS). Finally, this study conducts symptom pattern analysis and visualization to illustrate temporal changes in symptom prevalence and co-occurrence.</div></div><div><h3>Results</h3><div>This study identified 191,096 self-reported COVID-19-positive cases from COVID-19-related tweets and retrospectively collected 811,398,280 historical tweets, of which 2,120,964 contained symptoms information. After denoising, 39 % (832,287) of symptom-sharing tweets reflected user-experienced mentions. The trained USM model achieved an average F1 score of 0.927. 
Further analysis revealed a higher prevalence of upper respiratory tract symptoms during the Omicron period compared to the Delta and Wild-type periods. Additionally, there was a pronounced co-occurrence of lower respiratory tract and nervous system symptoms in the Wild-type strain and Delta variant.</div></div><div><h3>Conclusion</h3><div>This study established a robust framework for analyzing longitudinal social media data to monitor symptoms during a pandemic. By integrating denoising of user-experienced symptom mentions, our findings reveal the duration of different symptoms over time and by variant within a cohort of nearly 200,000 patients, providing critical insights into symptom trends that are often difficult to capture through traditional data source.</div></div>","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"162 ","pages":"Article 104778"},"PeriodicalIF":4.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-01DOI: 10.1016/j.jbi.2024.104768
Jun Wen , Hao Xue , Everett Rush , Vidul A. Panickan , Tianrun Cai , Doudou Zhou , Yuk-Lam Ho , Lauren Costa , Edmon Begoli , Chuan Hong , J. Michael Gaziano , Kelly Cho , Katherine P. Liao , Junwei Lu , Tianxi Cai
Motivation:
The increasing availability of Electronic Health Record (EHR) systems has created enormous potential for translational research. Recent developments in representation learning techniques have led to effective large-scale representations of EHR concepts along with knowledge graphs that empower downstream EHR studies. However, most existing methods require training with patient-level data, limiting their ability to expand training with multi-institutional EHR data. On the other hand, scalable approaches that only require summary-level data do not incorporate temporal dependencies between concepts.
Methods:
We introduce a DirectiOnal Medical Embedding (DOME) algorithm to encode temporally directional relationships between medical concepts, using summary-level EHR data. Specifically, DOME first aggregates patient-level EHR data into an asymmetric co-occurrence matrix. Then it computes two Positive Pointwise Mutual Information (PPMI) matrices to encode, respectively, the pairwise prior and posterior dependencies between medical concepts. Following that, a joint matrix factorization is performed on the two PPMI matrices, which results in three vectors for each concept: a semantic embedding and two directional context embeddings. They collectively provide a comprehensive depiction of the temporal relationship between EHR concepts.
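A minimal numerical sketch of the kind of computation described — not the authors' implementation, and the joint factorization here is approximated by a truncated SVD of the stacked PPMI matrices; the toy matrix size and rank are arbitrary:

```python
import numpy as np

def ppmi(C, eps=1e-12):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = C.sum()
    p_ij = C / total
    p_i = C.sum(axis=1, keepdims=True) / total  # row marginals
    p_j = C.sum(axis=0, keepdims=True) / total  # column marginals
    pmi = np.log((p_ij + eps) / (p_i * p_j + eps))
    return np.maximum(pmi, 0.0)

# Toy asymmetric co-occurrence counts: C[i, j] = count of concept j
# occurring AFTER concept i in patient timelines.
rng = np.random.default_rng(0)
C = rng.integers(0, 20, size=(6, 6)).astype(float)

M_post = ppmi(C)     # posterior ("j after i") dependencies
M_prior = ppmi(C.T)  # prior ("j before i") dependencies

# Joint factorization sketch: rank-k truncated SVD of the stacked PPMI
# matrices gives one shared semantic embedding per concept (right factors)
# and two directional context embeddings (left factors of each block).
k = 3
U, s, Vt = np.linalg.svd(np.vstack([M_post, M_prior]), full_matrices=False)
semantic = Vt[:k].T * np.sqrt(s[:k])   # shared semantic embedding
ctx_post = U[:6, :k] * np.sqrt(s[:k])  # forward-direction context
ctx_prior = U[6:, :k] * np.sqrt(s[:k])  # backward-direction context
print(semantic.shape, ctx_post.shape, ctx_prior.shape)  # (6, 3) (6, 3) (6, 3)
```

The asymmetry of `C` is what carries direction: `ppmi(C)` and `ppmi(C.T)` differ, so a concept's forward and backward context embeddings differ too.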
Results:
We highlight the advantages and translational potential of DOME through three sets of validation studies. First, DOME consistently improves existing direction-agnostic embedding vectors for disease risk prediction in several diseases, for example achieving a relative gain of 5.5% in the area under the receiver operating characteristic curve (AUROC) for lung cancer. Second, DOME excels in directional drug-disease relationship inference by successfully differentiating between drug side effects and indications, achieving relative AUROC gains over state-of-the-art methods of 10.8% and 6.6%, respectively. Finally, DOME effectively constructs directional knowledge graphs, which distinguish disease risk factors from comorbidities, thereby revealing disease progression trajectories. The source code is provided at https://github.com/celehs/Directional-EHR-embedding.
{"title":"DOME: Directional medical embedding vectors from Electronic Health Records","authors":"Jun Wen , Hao Xue , Everett Rush , Vidul A. Panickan , Tianrun Cai , Doudou Zhou , Yuk-Lam Ho , Lauren Costa , Edmon Begoli , Chuan Hong , J. Michael Gaziano , Kelly Cho , Katherine P. Liao , Junwei Lu , Tianxi Cai","doi":"10.1016/j.jbi.2024.104768","DOIUrl":"10.1016/j.jbi.2024.104768","url":null,"abstract":"<div><h3>Motivation:</h3><div>The increasing availability of Electronic Health Record (EHR) systems has created enormous potential for translational research. Recent developments in representation learning techniques have led to effective large-scale representations of EHR concepts along with knowledge graphs that empower downstream EHR studies. However, most existing methods require training with patient-level data, limiting their abilities to expand the training with multi-institutional EHR data. On the other hand, scalable approaches that only require summary-level data do not incorporate temporal dependencies between concepts.</div></div><div><h3>Methods:</h3><div>We introduce a DirectiOnal Medical Embedding (DOME) algorithm to encode temporally directional relationships between medical concepts, using summary-level EHR data. Specifically, DOME first aggregates patient-level EHR data into an asymmetric co-occurrence matrix. Then it computes two Positive Pointwise Mutual Information (PPMI) matrices to correspondingly encode the pairwise prior and posterior dependencies between medical concepts. Following that, a joint matrix factorization is performed on the two PPMI matrices, which results in three vectors for each concept: a semantic embedding and two directional context embeddings. They collectively provide a comprehensive depiction of the temporal relationship between EHR concepts.</div></div><div><h3>Results:</h3><div>We highlight the advantages and translational potential of DOME through three sets of validation studies. 
First, DOME consistently improves existing direction-agnostic embedding vectors for disease risk prediction in several diseases, for example achieving a relative gain of 5.5% in the area under the receiver operating characteristic (AUROC) for lung cancer. Second, DOME excels in directional drug-disease relationship inference by successfully differentiating between drug side effects and indications, correspondingly achieving relative AUROC gain over the state-of-the-art methods by 10.8% and 6.6%. Finally, DOME effectively constructs directional knowledge graphs, which distinguish disease risk factors from comorbidities, thereby revealing disease progression trajectories. The source codes are provided at <span><span>https://github.com/celehs/Directional-EHR-embedding</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"162 ","pages":"Article 104768"},"PeriodicalIF":4.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-01DOI: 10.1016/j.jbi.2025.104779
Zhenzhong Liu , Kelong Chen , Shuai Wang , Yijun Xiao , Guobin Zhang
Objective: The application of artificial intelligence (AI) in health care has led to a surge of interest in surgical process modeling (SPM). The objective of this study is to investigate the role of deep learning in recognizing surgical workflows and extracting reliable patterns from datasets used in minimally invasive surgery, thereby advancing the development of context-aware intelligent systems in endoscopic surgeries. Methods: We conducted a comprehensive search of articles related to SPM from 2018 to April 2024 in the PubMed, Web of Science, Google Scholar, and IEEE Xplore databases. We selected articles on surgical process modeling that used annotated surgical videos and focused on each study's specific methods and results. Results: The search initially yielded 2937 articles. After filtering on the basis of the relevance of titles, abstracts, and content, 59 articles were selected for full-text review. These studies highlight the widespread adoption of neural networks and transformers for surgical workflow analysis (SWA). They focus on minimally invasive surgeries performed with laparoscopes and microscopes. However, the process of surgical annotation lacks detailed description, and there are significant differences in the annotation process for different surgical procedures. Conclusion: Temporal and spatial sequences are key factors in identifying surgical phases. RNN, TCN, and transformer networks are commonly used to extract long-range temporal relationships. Multimodal data input is beneficial, as it combines information from surgical instruments. However, publicly available datasets often lack clinical knowledge, and establishing large annotated datasets for surgery remains a challenge. To reduce annotation costs, methods such as semi-supervised learning, self-supervised learning, contrastive learning, transfer learning, and active learning are commonly used.
{"title":"Deep learning in surgical process modeling: A systematic review of workflow recognition","authors":"Zhenzhong Liu , Kelong Chen , Shuai Wang , Yijun Xiao , Guobin Zhang","doi":"10.1016/j.jbi.2025.104779","DOIUrl":"10.1016/j.jbi.2025.104779","url":null,"abstract":"<div><div>Objective: The application of artificial intelligence (AI) in health care has led to a surge of interest in surgical process modeling (SPM). The objective of this study is to investigate the role of deep learning in recognizing surgical workflows and extracting reliable patterns from datasets used in minimally invasive surgery, thereby advancing the development of context-aware intelligent systems in endoscopic surgeries. Methods<strong>:</strong> We conducted a comprehensive search of articles related to SPM from 2018 to April 2024 in the PubMed, Web of Science, Google Scholar, and IEEE Xplore databases. We chose surgical videos with annotations to describe the article on surgical process modeling and focused on examining the specific methods and research results of each study. Results: The search initially yielded 2937 articles. After filtering on the basis of the relevance of titles, abstracts, and content, 59 articles were selected for full-text review. These studies highlight the widespread adoption of neural networks, and transformers for surgical workflow analysis (SWA). They focus on minimally invasive surgeries performed with laparoscopes and microscopes. However, the process of surgical annotation lacks detailed description, and there are significant differences in the annotation process for different surgical procedures. Conclusion: Time and spatial sequences are key factors determining the identification of surgical phase. RNN, TCN, and transformer networks are commonly used to extract long-distance temporal relationships. Multimodal data input is beneficial, as it combines information from surgical instruments. 
However, publicly available datasets often lack clinical knowledge, and establishing large annotated datasets for surgery remains a challenge. To reduce annotation costs, methods such as semi supervised learning, self-supervised learning, contrastive learning, transfer learning, and active learning are commonly used.</div></div>","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"162 ","pages":"Article 104779"},"PeriodicalIF":4.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}