Pub Date: 2025-02-12 | DOI: 10.1109/JBHI.2025.3540894
Jingjing Gao, Sutao Song
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder, and precise prediction from imaging or other biological information is of great significance. However, predicting ASD in individuals presents two challenges: first, there is extensive heterogeneity among subjects; second, existing models fail to fully exploit rs-fMRI and non-imaging information, resulting in less accurate classification. This paper therefore proposes a novel framework, HE-MF, consisting of a Hierarchical Feature Extraction Module and a Multimodal Deep Feature Integration Module. The Hierarchical Feature Extraction Module achieves multi-level, fine-grained feature extraction and enhances the model's discriminative ability by progressively extracting the most discriminative functional-connectivity features at both the intra-group and overall subject levels. The Multimodal Deep Feature Integration Module extracts common and distinctive features from rs-fMRI and non-imaging information through two separate channels and uses an attention mechanism for dynamic weight allocation, achieving deep feature fusion and significantly improving predictive performance. Experimental results on the public ABIDE dataset show that HE-MF achieves an accuracy of 95.17% on the ASD identification task, significantly outperforming existing state-of-the-art methods and demonstrating its effectiveness. To verify generalization, we also applied the model to related tasks on the ADNI dataset, further demonstrating its strong feature learning and generalization capabilities.
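The attention-based dynamic weight allocation described above can be sketched in miniature (hypothetical function names; the paper's actual architecture is not reproduced here): each modality's feature vector receives a relevance score, the scores are softmax-normalized, and the fused representation is the weighted sum.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(modality_feats, scores):
    """Fuse equal-length feature vectors with attention weights.

    modality_feats: list of feature vectors (one per modality).
    scores: one relevance score per modality (e.g., from a small scoring
    network); higher score -> larger share of the fused representation.
    Returns (fused_vector, weights).
    """
    weights = softmax(scores)
    dim = len(modality_feats[0])
    fused = [sum(w * feats[i] for w, feats in zip(weights, modality_feats))
             for i in range(dim)]
    return fused, weights

# Toy example: rs-fMRI features vs. non-imaging (phenotypic) features.
fmri = [0.2, 0.8, 0.5]
pheno = [0.6, 0.1, 0.9]
fused, weights = attention_fuse([fmri, pheno], scores=[2.0, 1.0])
```

Because the weights come from a softmax, they always sum to one, so the fusion stays a convex combination of the modality features regardless of the raw scores.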
{"title":"A Hierarchical Feature Extraction and Multimodal Deep Feature Integration-Based Model for Autism Spectrum Disorder Identification.","authors":"Jingjing Gao, Sutao Song","doi":"10.1109/JBHI.2025.3540894","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540894","url":null,"abstract":"<p><p>Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder, and precise prediction using imaging or other biological information is of great significance. However, predicting ASD in individuals presents the following challenges: first, there is extensive heterogeneity among subjects; second, existing models fail to fully utilize rs-fMRI and non-imaging information, resulting in less accurate classification results. Therefore, this paper proposes a novel framework, named HE-MF, which consists of a Hierarchical Feature Extraction Module and a Multimodal Deep Feature Integration Module. The Hierarchical Feature Extraction Module aims to achieve multi-level, fine-grained feature extraction and enhance the model's discriminative ability by progressively extracting the most discriminative functional connectivity features at both the intra-group and overall subject levels. The Multimodal Deep Integration Module extracts common and distinctive features based on rs-fMRI and non-imaging information through two separate channels, and utilizes an attention mechanism for dynamic weight allocation, thereby achieving deep feature fusion and significantly improving the model's predictive performance. Experimental results on the ABIDE public dataset show that the HE-MF model achieves an accuracy of 95.17% in the ASD identification task, significantly outperforming existing state-of-the-art methods, demonstrating its effectiveness and superiority. 
To verify the model's generalization capability, we successfully applied it to relevant tasks in the ADNI dataset, further demonstrating the HE-MF model's outstanding performance in feature learning and generalization capabilities.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In clinical acupuncture practice, needle twirling (NT) and needle retention (NR) are strategically combined to achieve different therapeutic effects, which makes distinguishing between acupuncture states important. Scalp EEG has been shown to be closely related to brain activity and acupuncture stimulation. In this work, we designed an acupuncture paradigm and collected scalp EEG to study how EEG changes differ across acupuncture states. Since deep learning (DL) is increasingly used in EEG analysis, we propose the Acupuncture Transformer Detector (ATD), a model based on Convolutional Neural Networks (CNNs) and the Transformer. ATD captures the local and global features of EEG under the acupuncture states at the Zusanli acupoint (ST-36) in an end-to-end classification framework. Experimental results from 28 healthy participants show that the proposed model can efficiently classify the EEG across states, with an accuracy of . Time-frequency analysis revealed that power changes under the different acupuncture states were mainly confined to the delta band, and brain topography showed that ST-36 stimulation primarily activated the left frontal and parieto-occipital areas. This method offers a DL perspective on automatic recognition of acupuncture state and a new route toward standardizing acupuncture procedures.
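The delta-band power finding can be illustrated with a toy band-power computation — a naive DFT over synthetic data, purely as a sketch of the time-frequency analysis step, not the authors' pipeline:

```python
import math

def band_power(signal, fs, lo, hi):
    """Average power of DFT bins whose frequency falls in [lo, hi] Hz.

    Naive O(n^2) DFT; fine for short illustrative signals.
    """
    n = len(signal)
    power = 0.0
    count = 0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / count if count else 0.0

# Synthetic "EEG": a 2 Hz (delta) component plus a weak 10 Hz (alpha) component,
# sampled at 128 Hz for 2 seconds.
fs = 128
sig = [math.sin(2 * math.pi * 2 * t / fs) + 0.2 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(fs * 2)]
delta = band_power(sig, fs, 0.5, 4.0)   # delta band, roughly 0.5-4 Hz
alpha = band_power(sig, fs, 8.0, 12.0)  # alpha band, roughly 8-12 Hz
```

With these synthetic components, the delta band carries most of the power, mirroring the kind of band-confined effect the study reports.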
{"title":"Acupuncture State Detection at Zusanli (ST-36) Based on Scalp EEG and Transformer.","authors":"Wenhao Rao, Meiyan Xu, Haochen Wang, Weicheng Hua, Jiayang Guo, Yongheng Zhang, Haibin Zhu, Ziqiu Zhou, Jiawei Xiong, Jianbin Zhang, Yijie Pan, Peipei Gu, Duo Chen","doi":"10.1109/JBHI.2025.3540924","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540924","url":null,"abstract":"<p><p>In clinical acupuncture practice, needle twirling (NT) and needle retention (NR) are strategically combined to achieve different therapeutic effects, highlighting the importance of distinguishing between different acupuncture states. Scalp EEG has been proven significantly relevant to brain activity and acupuncture stimulation. In this work, we designed an acupuncture paradigm to collect scalp EEG to study the differences in EEG changes during different acupuncture states. Since deep learning (DL) has been increasingly used in EEG analysis, we propose the Acupuncture Transformer Detector (ATD), a model based on Convolutional Neural Networks (CNN) and Transformer technology. ATD encapsulates the local and global features of EEG under the acupuncture states of Zusanli acupoint (ST-36) in an end-to-end classification framework. The experiment results from 28 healthy participants show that the proposed model can efficiently classify the EEG in different states, with an accuracy of . In this study, time-frequency analysis revealed that power changes were mainly confined to the delta frequency band under different acupuncture states. Brain topography revealed that ST-36 was activated primarily on the left frontal and parieto-occipital areas. 
This method provides new ideas for automatic recognition of acupuncture status from the perspective of DL, offering new solutions for standardizing acupuncture procedures.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drug-drug interactions (DDIs) are a significant focus in biomedical research and clinical practice due to their potential to compromise treatment outcomes or cause adverse effects. While deep learning approaches have advanced DDI extraction, challenges such as severe class imbalance and the complexity of biomedical relationships persist. This study introduces BioFocal-DDI, a framework combining BioGPT for data augmentation, BioBERT and BiLSTM for contextual and sequential feature extraction, and Relational Graph Convolutional Networks (ReGCN) for relational modeling. To address class imbalance, a Focal Loss-based Attention mechanism is employed to enhance learning on underrepresented and challenging instances. Evaluated on the DDI Extraction 2013 dataset, BioFocal-DDI achieves a precision of 86.75%, recall of 86.53%, and an F1 Score of 86.64%. These results suggest that the proposed method is effective in improving DDI extraction. All our code and data have been publicly released at https://github.com/Hero-Legend/BioFocal-DDI.
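The focal loss underlying the paper's attention mechanism follows the standard formulation, which down-weights easy, well-classified examples; a minimal sketch of that formula (the paper's exact attention variant is not reproduced):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class.
    y: true label (0 or 1).
    The (1 - p_t)^gamma factor shrinks the loss of easy examples so
    training focuses on hard, underrepresented instances.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

# An easy positive (p=0.9) contributes far less loss than a hard one (p=0.3).
easy = focal_loss(0.9, 1)
hard = focal_loss(0.3, 1)
```

Setting gamma to 0 recovers alpha-weighted cross-entropy; larger gamma sharpens the focus on hard examples, which is the lever against class imbalance.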
{"title":"Optimized Drug-Drug Interaction Extraction With BioGPT and Focal Loss-Based Attention.","authors":"Zhu Yuan, Shuailiang Zhang, Huiyun Zhang, Ping Xie, Yaxun Jia","doi":"10.1109/JBHI.2025.3540861","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540861","url":null,"abstract":"<p><p>Drug-drug interactions (DDIs) are a significant focus in biomedical research and clinical practice due to their potential to compromise treatment outcomes or cause adverse effects. While deep learning approaches have advanced DDI extraction, challenges such as severe class imbalance and the complexity of biomedical relationships persist. This study introduces BioFocal-DDI, a framework combining BioGPT for data augmentation, BioBERT and BiLSTM for contextual and sequential feature extraction, and Relational Graph Convolutional Networks (ReGCN) for relational modeling. To address class imbalance, a Focal Loss-based Attention mechanism is employed to enhance learning on underrepresented and challenging instances. Evaluated on the DDI Extraction 2013 dataset, BioFocal-DDI achieves a precision of 86.75%, recall of 86.53%, and an F1 Score of 86.64%. These results suggest that the proposed method is effective in improving DDI extraction. All our code and data have been publicly released at https://github.com/Hero-Legend/BioFocal-DDI.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-11 | DOI: 10.1109/JBHI.2025.3540574
Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong-Yu Zhou, Ritse Mann
Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing the opportunity to leverage temporal self-supervision for tracking disease progression. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data is challenging. In this work, we explicitly account for longitudinal NAT examinations and their accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results on internal and external datasets demonstrate that our approach not only enhances label efficiency across zero-, few-, and full-shot regimes but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas across temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating temporal clinical decision-making.
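Vision-text alignment of this kind is typically trained with a contrastive (InfoNCE-style) objective that pulls matched image-report embedding pairs together; the following is a minimal sketch under that assumption, not the authors' exact VTA loss:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(img_embs, txt_embs, temperature=0.1):
    """InfoNCE over a batch: each image should match its own report."""
    loss = 0.0
    n = len(img_embs)
    for i in range(n):
        logits = [cosine(img_embs[i], t) / temperature for t in txt_embs]
        m = max(logits)  # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n

# Aligned pairs (each image embedding equals its report embedding) should
# score a lower loss than a shuffled pairing.
imgs = [[1.0, 0.0], [0.0, 1.0]]
aligned = contrastive_loss(imgs, [[1.0, 0.0], [0.0, 1.0]])
shuffled = contrastive_loss(imgs, [[0.0, 1.0], [1.0, 0.0]])
```

The temporal progress objectives (TVP/TTP) would extend this idea across time points; only the single-time-scale alignment is sketched here.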
{"title":"Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment.","authors":"Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong-Yu Zhou, Ritse Mann","doi":"10.1109/JBHI.2025.3540574","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540574","url":null,"abstract":"<p><p>Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model, that employs multi-scale self-supervision scheme, including single-time scale vision-text alignment (VTA) learning and multi-time scale visual/textual progress (TVP/TTP) learning to extract temporal representations from each modality, thereby facilitates the downstream evaluation of tumor progress. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. 
Results from internal and external datasets demonstrate that our approach not only enhances label efficiency across the zero-, few- and full-shot regime experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas in temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating the temporal clinical decision-making process.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-11 | DOI: 10.1109/JBHI.2025.3540899
Jing Wu, Simon Mayer, Simeon Pilz, Yasmine S Antille, Jan L Albert, Melanie Stoll, Kimberly Garcia, Klaus Fuchs, Lia Bally, Lukas Eichelberger, Tanja Schneider, Verena Tiefenbeck, Sybilla Merian, Freya Orban
Unhealthy dietary habits are a major preventable risk factor for widespread non-communicable diseases (NCDs). Diet counseling is effective in managing diet-related NCDs but is constrained by its manual nature and limited (clinical) resources. To address these challenges, we propose FoodCoach, a fully automated diet counseling system. It monitors people's food purchases using digital receipts from loyalty cards and provides structured dietary recommendations. We introduce the FoodCoach recommender algorithm and architecture, along with evaluation results from a two-arm randomized controlled trial involving 61 participants. The trial demonstrates the technical feasibility of scalable, fully automated diet counseling, although it did not show a significant change in the healthiness of participants' food purchases. We further show how others can deploy and extend the FoodCoach system in their own context and provide all relevant component implementations. Our core research contributions are: 1) a novel dietary recommendation algorithm designed and implemented with clinical experts, and 2) a scalable system architecture that employs a knowledge graph for interoperability and applicability to diverse domains and data sources. In practice, FoodCoach can augment traditional diet counseling through automatic diet monitoring and evaluation modules, streamlining the counseling process, conserving clinical resources, and ultimately contributing to the reduction of NCD prevalence.
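A receipt-based recommender of this kind can be sketched as a simple rule that flags purchased products exceeding per-100 g nutrient thresholds (illustrative thresholds and function names; FoodCoach's actual algorithm is knowledge-graph based and more elaborate):

```python
def recommend(basket, limits):
    """Flag purchased items whose per-100g nutrient values exceed limits.

    basket: list of (product, nutrients-per-100g dict) from receipt lines.
    limits: e.g. {"sugar_g": 22.5, "salt_g": 1.5} -- illustrative "high"
    thresholds per 100 g; a real system would use clinically vetted rules.
    Returns {product: [nutrients over the limit]}.
    """
    flags = {}
    for product, nutrients in basket:
        over = [k for k, v in nutrients.items() if k in limits and v > limits[k]]
        if over:
            flags[product] = over
    return flags

# Hypothetical receipt: the soup exceeds the salt threshold, the cola does not
# exceed the sugar threshold per 100 g.
basket = [("cola", {"sugar_g": 10.6, "salt_g": 0.0}),
          ("soup", {"sugar_g": 3.0, "salt_g": 2.1})]
flags = recommend(basket, {"sugar_g": 22.5, "salt_g": 1.5})
```

Flagged items would then feed the recommendation step, e.g. suggesting lower-salt alternatives from the same product category.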
{"title":"FoodCoach: Fully Automated Diet Counseling.","authors":"Jing Wu, Simon Mayer, Simeon Pilz, Yasmine S Antille, Jan L Albert, Melanie Stoll, Kimberly Garcia, Klaus Fuchs, Lia Bally, Lukas Eichelberger, Tanja Schneider, Verena Tiefenbeck, Sybilla Merian, Freya Orban","doi":"10.1109/JBHI.2025.3540899","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540899","url":null,"abstract":"<p><p>Unhealthy dietary habits are a major preventable risk factor for widespread non-communicable diseases (NCD). Diet counseling is effective in managing diet-related NCDs, but constrained by its manual nature and limited (clinical) resources. To address these challenges, we propose a fully automated diet counseling system FoodCoach. It monitors people's food purchases using digital receipts from loyalty cards and provides structured dietary recommendations. We introduce the FoodCoach system's recommender algorithm and architecture, along with evaluation results from a two-arm randomized controlled trial involving 61 participants. The trial results demonstrate the technical feasibility and potential for scalable, fully automated diet counseling, despite not showing a significant change in participants' food purchase healthiness. We further show how others can deploy and extend the FoodCoach system in their own context and provide all relevant component implementations. Our core research contributions are: 1) a novel dietary recommendation algorithm designed and implemented with clinical experts, and 2) a scalable system architecture that employs a knowledge graph for enhanced interoperability and applicability to diverse domains and data sources. From a practical perspective, FoodCoach can augment traditional diet counseling through automatic diet monitoring and evaluation modules. 
Additionally, it streamlines the counseling process, conserving clinical resources, and ultimately contributing to the reduction of NCD prevalence.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523739
{"title":"IEEE Journal of Biomedical and Health Informatics Publication Information","authors":"","doi":"10.1109/JBHI.2024.3523739","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523739","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"C2-C2"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879099","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T cell receptors (TCRs) play a crucial role in numerous immunotherapies targeting tumor cells. However, their acquisition and optimization present significant challenges, requiring laborious and time-consuming wet-lab experiments. Deep generative models have demonstrated remarkable capabilities in functional protein sequence generation, offering a promising route to obtaining specific TCR sequences. Here, we propose GRATCR, a framework that combines two pre-trained modules through a novel "grafting" strategy to generate de novo TCR sequences targeting specific epitopes. Experimental results demonstrate that TCRs generated by GRATCR exhibit higher specificity toward the desired epitopes and are more biologically functional than those from the state-of-the-art model, while using significantly less training data. Additionally, the generated sequences are novel relative to natural sequences, and an interpretability evaluation further confirmed that the model captures important binding patterns.
{"title":"GRATCR: epitope-Specific T Cell receptor Sequence generation With Data-efficient Pre-Trained Models.","authors":"Zhenghong Zhou, Junwei Chen, Shenggeng Lin, Liang Hong, Dong-Qing Wei, Yi Xiong","doi":"10.1109/JBHI.2024.3514089","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3514089","url":null,"abstract":"<p><p>T cell receptors (TCRs) play a crucial role in numerous immunotherapies targeting tumor cells. However, their acquisition and optimization present significant challenges, involving laborious and time-consuming wet lab experimental resource. Deep generative models have demonstrated remarkable capabilities in functional protein sequence generation, offering a promising solution for enhancing the acquisition of specific TCR sequences. Here, we propose GRATCR, a framework incorporates two pre-trained modules through a novel \"grafting\" strategy, to de-novo generate TCR sequences targeting specific epitopes. Experimental results demonstrate that TCRs generated by GRATCR exhibit higher specificity toward desired epitopes and are more biologically functional compared with the state-of-the-art model, by using significantly fewer training data. Additionally, the generated sequences display novelty compared to natural sequences, and the interpretability evaluation further confirmed that the model is capable of capturing important binding patterns.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523751
Maria Fernanda Cabrera;Tayo Obafemi-Ajayi;Ahmed Metwally;Bobak J Mortazavi
{"title":"Guest Editorial: Deep Medicine and AI for Health","authors":"Maria Fernanda Cabrera;Tayo Obafemi-Ajayi;Ahmed Metwally;Bobak J Mortazavi","doi":"10.1109/JBHI.2024.3523751","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523751","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"737-740"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2025.3530794
Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan
With the rapid development of Large Language Model (LLM) technology, LLMs have become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge of LLMs, so there is an urgent need for a summary of LLM applications in biomedical data analysis. This review summarizes the latest research on LLMs in biomedicine. We first outline LLM techniques, then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts, and drug discovery. Finally, we discuss the challenges of applying LLMs to biomedical data analysis. This review is intended for researchers interested in LLM technology and aims to help them understand and apply LLMs in biomedical data analysis research.
{"title":"The Large Language Models on Biomedical Data Analysis: A Survey.","authors":"Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan","doi":"10.1109/JBHI.2025.3530794","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3530794","url":null,"abstract":"<p><p>With the rapid development of Large Language Model (LLM) technology, it has become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge about LLM. Therefore, there is an urgent need for a summary of LLM applications in biomedical data analysis. Herein, we propose this review by summarizing the latest research work on LLM in biomedicine. In this review, LLM techniques are first outlined. We then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts and drug discovery. Finally, the challenges of LLM in biomedical data analysis are discussed. In summary, this review is intended for researchers interested in LLM technology and aims to help them understand and apply LLM in biomedical data analysis research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523743
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2024.3523743","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523743","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"C3-C3"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879095","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}