
Latest Publications: IEEE Journal of Biomedical and Health Informatics

A Hierarchical Feature Extraction and Multimodal Deep Feature Integration-Based Model for Autism Spectrum Disorder Identification.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-12 | DOI: 10.1109/JBHI.2025.3540894
Jingjing Gao, Sutao Song

Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder, and precise prediction from imaging or other biological information is of great significance. However, predicting ASD in individuals presents two challenges: first, there is extensive heterogeneity among subjects; second, existing models fail to fully exploit rs-fMRI and non-imaging information, resulting in less accurate classification. This paper therefore proposes a novel framework, HE-MF, which consists of a Hierarchical Feature Extraction Module and a Multimodal Deep Feature Integration Module. The Hierarchical Feature Extraction Module performs multi-level, fine-grained feature extraction and enhances the model's discriminative ability by progressively extracting the most discriminative functional connectivity features at both the intra-group and overall-subject levels. The Multimodal Deep Feature Integration Module extracts common and distinctive features from rs-fMRI and non-imaging information through two separate channels, and uses an attention mechanism for dynamic weight allocation, achieving deep feature fusion and significantly improving predictive performance. Experimental results on the public ABIDE dataset show that HE-MF achieves an accuracy of 95.17% on the ASD identification task, significantly outperforming existing state-of-the-art methods and demonstrating its effectiveness. To verify its generalization capability, we applied the model to related tasks on the ADNI dataset, further demonstrating HE-MF's strong feature learning and generalization performance.
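
The attention-based dynamic weight allocation described above can be pictured with a small sketch. Below is a minimal, hypothetical PyTorch example that fuses an imaging and a non-imaging feature vector with learned modality weights; the layer sizes, the two-class head, and the input dimensions are illustrative assumptions, not the authors' HE-MF implementation.

```python
# Minimal sketch of attention-weighted fusion of two modality embeddings
# (e.g., rs-fMRI features and non-imaging features). All dimensions and the
# gating design are assumptions for illustration only.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    def __init__(self, dim_img: int, dim_meta: int, dim_hidden: int = 128):
        super().__init__()
        self.proj_img = nn.Linear(dim_img, dim_hidden)    # imaging channel
        self.proj_meta = nn.Linear(dim_meta, dim_hidden)  # non-imaging channel
        # One scalar score per modality decides how much it contributes.
        self.score = nn.Linear(dim_hidden, 1)
        self.classifier = nn.Linear(dim_hidden, 2)        # ASD vs. control

    def forward(self, x_img: torch.Tensor, x_meta: torch.Tensor) -> torch.Tensor:
        h = torch.stack([self.proj_img(x_img), self.proj_meta(x_meta)], dim=1)  # (B, 2, H)
        weights = torch.softmax(self.score(torch.tanh(h)), dim=1)               # (B, 2, 1)
        fused = (weights * h).sum(dim=1)                                         # (B, H)
        return self.classifier(fused)


# Usage with random features of arbitrary size standing in for preprocessed inputs.
model = AttentionFusion(dim_img=6000, dim_meta=3)
logits = model(torch.randn(4, 6000), torch.randn(4, 3))
print(logits.shape)  # torch.Size([4, 2])
```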

{"title":"A Hierarchical Feature Extraction and Multimodal Deep Feature Integration-Based Model for Autism Spectrum Disorder Identification.","authors":"Jingjing Gao, Sutao Song","doi":"10.1109/JBHI.2025.3540894","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540894","url":null,"abstract":"<p><p>Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder, and precise prediction using imaging or other biological information is of great significance. However, predicting ASD in individuals presents the following challenges: first, there is extensive heterogeneity among subjects; second, existing models fail to fully utilize rs-fMRI and non-imaging information, resulting in less accurate classification results. Therefore, this paper proposes a novel framework, named HE-MF, which consists of a Hierarchical Feature Extraction Module and a Multimodal Deep Feature Integration Module. The Hierarchical Feature Extraction Module aims to achieve multi-level, fine-grained feature extraction and enhance the model's discriminative ability by progressively extracting the most discriminative functional connectivity features at both the intra-group and overall subject levels. The Multimodal Deep Integration Module extracts common and distinctive features based on rs-fMRI and non-imaging information through two separate channels, and utilizes an attention mechanism for dynamic weight allocation, thereby achieving deep feature fusion and significantly improving the model's predictive performance. Experimental results on the ABIDE public dataset show that the HE-MF model achieves an accuracy of 95.17% in the ASD identification task, significantly outperforming existing state-of-the-art methods, demonstrating its effectiveness and superiority. To verify the model's generalization capability, we successfully applied it to relevant tasks in the ADNI dataset, further demonstrating the HE-MF model's outstanding performance in feature learning and generalization capabilities.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Acupuncture State Detection at Zusanli (ST-36) Based on Scalp EEG and Transformer.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-12 | DOI: 10.1109/JBHI.2025.3540924
Wenhao Rao, Meiyan Xu, Haochen Wang, Weicheng Hua, Jiayang Guo, Yongheng Zhang, Haibin Zhu, Ziqiu Zhou, Jiawei Xiong, Jianbin Zhang, Yijie Pan, Peipei Gu, Duo Chen

In clinical acupuncture practice, needle twirling (NT) and needle retention (NR) are strategically combined to achieve different therapeutic effects, which makes distinguishing between acupuncture states important. Scalp EEG has been shown to be closely related to brain activity and acupuncture stimulation. In this work, we designed an acupuncture paradigm and collected scalp EEG to study how EEG changes differ across acupuncture states. Since deep learning (DL) is increasingly used in EEG analysis, we propose the Acupuncture Transformer Detector (ATD), a model based on Convolutional Neural Networks (CNN) and the Transformer. ATD encapsulates the local and global EEG features under the acupuncture states of the Zusanli acupoint (ST-36) in an end-to-end classification framework. Experimental results from 28 healthy participants show that the proposed model can efficiently classify the EEG of different states, with an accuracy of . Time-frequency analysis revealed that power changes were mainly confined to the delta frequency band across acupuncture states, and brain topography showed that stimulation at ST-36 primarily activated the left frontal and parieto-occipital areas. From a DL perspective, this method provides new ideas for automatic recognition of acupuncture state and offers new solutions for standardizing acupuncture procedures.
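
As a rough illustration of the CNN-plus-Transformer design described above, the following PyTorch sketch stacks a temporal convolution block and a Transformer encoder for binary EEG-state classification; the channel count, window length, and layer sizes are placeholders and do not reproduce the authors' ATD architecture.

```python
# Minimal CNN + Transformer encoder for EEG-state classification, assuming
# input of shape (batch, channels, time). Sizes are illustrative assumptions.
import torch
import torch.nn as nn


class CnnTransformerEEG(nn.Module):
    def __init__(self, n_channels: int = 32, d_model: int = 64, n_classes: int = 2):
        super().__init__()
        # Temporal convolution extracts local features from the raw signal.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.BatchNorm1d(d_model),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Transformer encoder captures global temporal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)                      # (B, d_model, T/4)
        h = self.encoder(h.transpose(1, 2))   # (B, T/4, d_model)
        return self.head(h.mean(dim=1))       # average-pool over time, then classify


model = CnnTransformerEEG()
print(model(torch.randn(8, 32, 512)).shape)  # torch.Size([8, 2])
```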

{"title":"Acupuncture State Detection at Zusanli (ST-36) Based on Scalp EEG and Transformer.","authors":"Wenhao Rao, Meiyan Xu, Haochen Wang, Weicheng Hua, Jiayang Guo, Yongheng Zhang, Haibin Zhu, Ziqiu Zhou, Jiawei Xiong, Jianbin Zhang, Yijie Pan, Peipei Gu, Duo Chen","doi":"10.1109/JBHI.2025.3540924","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540924","url":null,"abstract":"<p><p>In clinical acupuncture practice, needle twirling (NT) and needle retention (NR) are strategically combined to achieve different therapeutic effects, highlighting the importance of distinguishing between different acupuncture states. Scalp EEG has been proven significantly relevant to brain activity and acupuncture stimulation. In this work, we designed an acupuncture paradigm to collect scalp EEG to study the differences in EEG changes during different acupuncture states. Since deep learning (DL) has been increasingly used in EEG analysis, we propose the Acupuncture Transformer Detector (ATD), a model based on Convolutional Neural Networks (CNN) and Transformer technology. ATD encapsulates the local and global features of EEG under the acupuncture states of Zusanli acupoint (ST-36) in an end-to-end classification framework. The experiment results from 28 healthy participants show that the proposed model can efficiently classify the EEG in different states, with an accuracy of . In this study, time-frequency analysis revealed that power changes were mainly confined to the delta frequency band under different acupuncture states. Brain topography revealed that ST-36 was activated primarily on the left frontal and parieto-occipital areas. This method provides new ideas for automatic recognition of acupuncture status from the perspective of DL, offering new solutions for standardizing acupuncture procedures.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimized Drug-Drug Interaction Extraction With BioGPT and Focal Loss-Based Attention.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-11 | DOI: 10.1109/JBHI.2025.3540861
Zhu Yuan, Shuailiang Zhang, Huiyun Zhang, Ping Xie, Yaxun Jia

Drug-drug interactions (DDIs) are a significant focus in biomedical research and clinical practice due to their potential to compromise treatment outcomes or cause adverse effects. While deep learning approaches have advanced DDI extraction, challenges such as severe class imbalance and the complexity of biomedical relationships persist. This study introduces BioFocal-DDI, a framework combining BioGPT for data augmentation, BioBERT and BiLSTM for contextual and sequential feature extraction, and Relational Graph Convolutional Networks (ReGCN) for relational modeling. To address class imbalance, a Focal Loss-based Attention mechanism is employed to enhance learning on underrepresented and challenging instances. Evaluated on the DDI Extraction 2013 dataset, BioFocal-DDI achieves a precision of 86.75%, recall of 86.53%, and an F1 Score of 86.64%. These results suggest that the proposed method is effective in improving DDI extraction. All our code and data have been publicly released at https://github.com/Hero-Legend/BioFocal-DDI.
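
For reference, the focal loss the framework builds on (Lin et al., 2017) can be written in a few lines. The sketch below shows the standard multi-class form, with gamma and alpha values chosen for illustration; it does not reproduce how BioFocal-DDI couples the loss to its attention mechanism.

```python
# Standard multi-class focal loss: down-weights easy examples so that rare and
# hard classes (e.g., underrepresented DDI relation types) dominate the gradient.
import torch
import torch.nn.functional as F


def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """logits: (B, C); targets: (B,) class indices."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)                                      # probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()


logits = torch.randn(16, 5)            # e.g., 5 interaction types (assumed)
targets = torch.randint(0, 5, (16,))
print(focal_loss(logits, targets))
```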

{"title":"Optimized Drug-Drug Interaction Extraction With BioGPT and Focal Loss-Based Attention.","authors":"Zhu Yuan, Shuailiang Zhang, Huiyun Zhang, Ping Xie, Yaxun Jia","doi":"10.1109/JBHI.2025.3540861","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540861","url":null,"abstract":"<p><p>Drug-drug interactions (DDIs) are a significant focus in biomedical research and clinical practice due to their potential to compromise treatment outcomes or cause adverse effects. While deep learning approaches have advanced DDI extraction, challenges such as severe class imbalance and the complexity of biomedical relationships persist. This study introduces BioFocal-DDI, a framework combining BioGPT for data augmentation, BioBERT and BiLSTM for contextual and sequential feature extraction, and Relational Graph Convolutional Networks (ReGCN) for relational modeling. To address class imbalance, a Focal Loss-based Attention mechanism is employed to enhance learning on underrepresented and challenging instances. Evaluated on the DDI Extraction 2013 dataset, BioFocal-DDI achieves a precision of 86.75%, recall of 86.53%, and an F1 Score of 86.64%. These results suggest that the proposed method is effective in improving DDI extraction. All our code and data have been publicly released at https://github.com/Hero-Legend/BioFocal-DDI.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-11 | DOI: 10.1109/JBHI.2025.3540574
Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong-Yu Zhou, Ritse Mann

Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing the opportunity to leverage temporal self-supervision for tracking disease progression. In addition, extracting dynamic progression from longitudinal, unannotated images with corresponding textual data is challenging. In this work, we explicitly account for longitudinal NAT examinations and their accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality and thereby facilitate the downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results on internal and external datasets demonstrate that our approach not only enhances label efficiency across zero-, few- and full-shot experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas across temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating temporal clinical decision-making.
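
The single-time-scale vision-text alignment (VTA) objective can be illustrated with a CLIP-style symmetric contrastive loss over paired scan/report embeddings; the temperature and embedding dimension below are assumptions, and the paper's multi-time-scale TVP/TTP objectives are not reproduced here.

```python
# Symmetric InfoNCE over matched image/report embedding pairs: matched pairs
# lie on the diagonal of the similarity matrix and are pulled together.
import torch
import torch.nn.functional as F


def vta_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (B, D) embeddings of matched exam/report pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(img.size(0))     # index of the matching partner
    # Image-to-text and text-to-image directions, averaged.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))


print(vta_loss(torch.randn(8, 256), torch.randn(8, 256)))
```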

{"title":"Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment.","authors":"Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong-Yu Zhou, Ritse Mann","doi":"10.1109/JBHI.2025.3540574","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540574","url":null,"abstract":"<p><p>Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model, that employs multi-scale self-supervision scheme, including single-time scale vision-text alignment (VTA) learning and multi-time scale visual/textual progress (TVP/TTP) learning to extract temporal representations from each modality, thereby facilitates the downstream evaluation of tumor progress. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results from internal and external datasets demonstrate that our approach not only enhances label efficiency across the zero-, few- and full-shot regime experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas in temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating the temporal clinical decision-making process.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FoodCoach: Fully Automated Diet Counseling.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-11 | DOI: 10.1109/JBHI.2025.3540899
Jing Wu, Simon Mayer, Simeon Pilz, Yasmine S Antille, Jan L Albert, Melanie Stoll, Kimberly Garcia, Klaus Fuchs, Lia Bally, Lukas Eichelberger, Tanja Schneider, Verena Tiefenbeck, Sybilla Merian, Freya Orban

Unhealthy dietary habits are a major preventable risk factor for widespread non-communicable diseases (NCDs). Diet counseling is effective in managing diet-related NCDs but is constrained by its manual nature and limited (clinical) resources. To address these challenges, we propose FoodCoach, a fully automated diet counseling system. It monitors people's food purchases using digital receipts from loyalty cards and provides structured dietary recommendations. We introduce the FoodCoach system's recommender algorithm and architecture, along with evaluation results from a two-arm randomized controlled trial involving 61 participants. The trial results demonstrate the technical feasibility and potential of scalable, fully automated diet counseling, despite not showing a significant change in the healthiness of participants' food purchases. We further show how others can deploy and extend the FoodCoach system in their own context and provide all relevant component implementations. Our core research contributions are: 1) a novel dietary recommendation algorithm designed and implemented with clinical experts, and 2) a scalable system architecture that employs a knowledge graph for enhanced interoperability and applicability to diverse domains and data sources. From a practical perspective, FoodCoach can augment traditional diet counseling through automatic diet monitoring and evaluation modules. Additionally, it streamlines the counseling process, conserving clinical resources and ultimately contributing to the reduction of NCD prevalence.
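
To make the receipt-driven monitoring idea concrete, the sketch below shows a purely illustrative rule-based scoring of purchased items against weekly nutrient targets; the item schema, targets, and scoring rule are invented for illustration and are not the FoodCoach recommender or knowledge-graph architecture described in the paper.

```python
# Hypothetical aggregation of receipt lines into structured weekly feedback.
# Nutrient targets and fields are placeholders, not FoodCoach's actual rules.
from dataclasses import dataclass
from typing import List


@dataclass
class PurchasedItem:
    name: str
    sugar_g: float   # grams per purchased quantity
    fiber_g: float


def weekly_feedback(items: List[PurchasedItem],
                    sugar_limit_g: float = 350.0,
                    fiber_target_g: float = 210.0) -> List[str]:
    """Aggregate receipt lines and return human-readable advice strings."""
    total_sugar = sum(i.sugar_g for i in items)
    total_fiber = sum(i.fiber_g for i in items)
    advice = []
    if total_sugar > sugar_limit_g:
        advice.append(f"Sugar purchases ({total_sugar:.0f} g) exceed the weekly limit.")
    if total_fiber < fiber_target_g:
        advice.append(f"Fiber purchases ({total_fiber:.0f} g) fall short of the weekly target.")
    return advice or ["Purchases are within the configured targets."]


basket = [PurchasedItem("cola 6-pack", sugar_g=212.0, fiber_g=0.0),
          PurchasedItem("whole-grain bread", sugar_g=6.0, fiber_g=42.0)]
print(weekly_feedback(basket))
```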

{"title":"FoodCoach: Fully Automated Diet Counseling.","authors":"Jing Wu, Simon Mayer, Simeon Pilz, Yasmine S Antille, Jan L Albert, Melanie Stoll, Kimberly Garcia, Klaus Fuchs, Lia Bally, Lukas Eichelberger, Tanja Schneider, Verena Tiefenbeck, Sybilla Merian, Freya Orban","doi":"10.1109/JBHI.2025.3540899","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3540899","url":null,"abstract":"<p><p>Unhealthy dietary habits are a major preventable risk factor for widespread non-communicable diseases (NCD). Diet counseling is effective in managing diet-related NCDs, but constrained by its manual nature and limited (clinical) resources. To address these challenges, we propose a fully automated diet counseling system FoodCoach. It monitors people's food purchases using digital receipts from loyalty cards and provides structured dietary recommendations. We introduce the FoodCoach system's recommender algorithm and architecture, along with evaluation results from a two-arm randomized controlled trial involving 61 participants. The trial results demonstrate the technical feasibility and potential for scalable, fully automated diet counseling, despite not showing a significant change in participants' food purchase healthiness. We further show how others can deploy and extend the FoodCoach system in their own context and provide all relevant component implementations. Our core research contributions are: 1) a novel dietary recommendation algorithm designed and implemented with clinical experts, and 2) a scalable system architecture that employs a knowledge graph for enhanced interoperability and applicability to diverse domains and data sources. From a practical perspective, FoodCoach can augment traditional diet counseling through automatic diet monitoring and evaluation modules. Additionally, it streamlines the counseling process, conserving clinical resources, and ultimately contributing to the reduction of NCD prevalence.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal of Biomedical and Health Informatics Publication Information
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523739
{"title":"IEEE Journal of Biomedical and Health Informatics Publication Information","authors":"","doi":"10.1109/JBHI.2024.3523739","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523739","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"C2-C2"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879099","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GRATCR: Epitope-Specific T Cell Receptor Sequence Generation With Data-Efficient Pre-Trained Models.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3514089
Zhenghong Zhou, Junwei Chen, Shenggeng Lin, Liang Hong, Dong-Qing Wei, Yi Xiong

T cell receptors (TCRs) play a crucial role in numerous immunotherapies targeting tumor cells. However, their acquisition and optimization present significant challenges, requiring laborious and time-consuming wet-lab experiments. Deep generative models have demonstrated remarkable capabilities in functional protein sequence generation, offering a promising way to accelerate the acquisition of specific TCR sequences. Here, we propose GRATCR, a framework that incorporates two pre-trained modules through a novel "grafting" strategy to generate TCR sequences de novo for specific epitopes. Experimental results demonstrate that TCRs generated by GRATCR exhibit higher specificity toward the desired epitopes and are more biologically functional than those of the state-of-the-art model, while using significantly less training data. Additionally, the generated sequences are novel compared with natural sequences, and the interpretability evaluation further confirms that the model captures important binding patterns.
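
The "grafting" of two pre-trained modules can be pictured as an epitope encoder feeding an autoregressive TCR decoder through cross-attention. In the sketch below, both modules are small, randomly initialized Transformers standing in for the pre-trained models; the vocabulary, dimensions, and grafting interface are assumptions rather than GRATCR's actual design.

```python
# Epitope-conditioned autoregressive generation of TCR tokens via cross-attention.
import torch
import torch.nn as nn

VOCAB = 25  # 20 amino acids plus special tokens (assumed)


class EpitopeToTCR(nn.Module):
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)   # epitope side
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)   # TCR side
        self.lm_head = nn.Linear(d_model, VOCAB)

    def forward(self, epitope: torch.Tensor, tcr_prefix: torch.Tensor) -> torch.Tensor:
        memory = self.encoder(self.embed(epitope))   # epitope context for cross-attention
        length = tcr_prefix.size(1)
        # Causal mask so each position only attends to earlier TCR tokens.
        mask = torch.triu(torch.full((length, length), float("-inf")), diagonal=1)
        h = self.decoder(self.embed(tcr_prefix), memory, tgt_mask=mask)
        return self.lm_head(h)                        # next-token logits


model = EpitopeToTCR()
logits = model(torch.randint(0, VOCAB, (2, 9)), torch.randint(0, VOCAB, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 25])
```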

{"title":"GRATCR: epitope-Specific T Cell receptor Sequence generation With Data-efficient Pre-Trained Models.","authors":"Zhenghong Zhou, Junwei Chen, Shenggeng Lin, Liang Hong, Dong-Qing Wei, Yi Xiong","doi":"10.1109/JBHI.2024.3514089","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3514089","url":null,"abstract":"<p><p>T cell receptors (TCRs) play a crucial role in numerous immunotherapies targeting tumor cells. However, their acquisition and optimization present significant challenges, involving laborious and time-consuming wet lab experimental resource. Deep generative models have demonstrated remarkable capabilities in functional protein sequence generation, offering a promising solution for enhancing the acquisition of specific TCR sequences. Here, we propose GRATCR, a framework incorporates two pre-trained modules through a novel \"grafting\" strategy, to de-novo generate TCR sequences targeting specific epitopes. Experimental results demonstrate that TCRs generated by GRATCR exhibit higher specificity toward desired epitopes and are more biologically functional compared with the state-of-the-art model, by using significantly fewer training data. Additionally, the generated sequences display novelty compared to natural sequences, and the interpretability evaluation further confirmed that the model is capable of capturing important binding patterns.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Guest Editorial: Deep Medicine and AI for Health
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523751
Maria Fernanda Cabrera, Tayo Obafemi-Ajayi, Ahmed Metwally, Bobak J Mortazavi
{"title":"Guest Editorial: Deep Medicine and AI for Health","authors":"Maria Fernanda Cabrera;Tayo Obafemi-Ajayi;Ahmed Metwally;Bobak J Mortazavi","doi":"10.1109/JBHI.2024.3523751","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523751","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"737-740"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Large Language Models on Biomedical Data Analysis: A Survey.
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2025.3530794
Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan

With its rapid development, Large Language Model (LLM) technology has become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge of LLMs, so a summary of LLM applications in biomedical data analysis is urgently needed. Herein, we present this review, summarizing the latest research on LLMs in biomedicine. We first outline LLM techniques, then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts and drug discovery. Finally, we discuss the challenges of LLMs in biomedical data analysis. This review is intended for researchers interested in LLM technology and aims to help them understand and apply LLMs in biomedical data analysis research.

{"title":"The Large Language Models on Biomedical Data Analysis: A Survey.","authors":"Wei Lan, Zhentao Tang, Mingyang Liu, Qingfeng Chen, Wei Peng, Yiping Phoebe Chen, Yi Pan","doi":"10.1109/JBHI.2025.3530794","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3530794","url":null,"abstract":"<p><p>With the rapid development of Large Language Model (LLM) technology, it has become an indispensable force in biomedical data analysis research. However, biomedical researchers currently have limited knowledge about LLM. Therefore, there is an urgent need for a summary of LLM applications in biomedical data analysis. Herein, we propose this review by summarizing the latest research work on LLM in biomedicine. In this review, LLM techniques are first outlined. We then discuss biomedical datasets and frameworks for biomedical data analysis, followed by a detailed analysis of LLM applications in genomics, proteomics, transcriptomics, radiomics, single-cell analysis, medical texts and drug discovery. Finally, the challenges of LLM in biomedical data analysis are discussed. In summary, this review is intended for researchers interested in LLM technology and aims to help them understand and apply LLM in biomedical data analysis research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal of Biomedical and Health Informatics Information for Authors
IF 6.7 | CAS Tier 2, Medicine | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-10 | DOI: 10.1109/JBHI.2024.3523743
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2024.3523743","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3523743","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 2","pages":"C3-C3"},"PeriodicalIF":6.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879095","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0