
Latest publications in IEEE Journal of Biomedical and Health Informatics

DRFNet: Enhancing Identity Discriminability and Feature Robustness for Cross-Session VEP-Based EEG Biometrics.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3604620
Honggang Liu, Han Yang, Dongjun Liu, Xuanyu Jin, Yong Peng, Wanzeng Kong

Biometric recognition using visually evoked potentials (VEPs), a type of neural response to visual stimuli recorded via electroencephalography (EEG), has shown great promise. However, the non-stationary nature of EEG signals poses a major challenge in cross-session scenarios, where data collected on different days often leads to performance degradation. To address this, we propose the Discriminative Robust Feature Network (DRFNet) to enhance the robustness and inter-subject discriminability of identity representations across sessions. DRFNet incorporates two key components: (1) A log-power transformation that amplifies inter-individual differences by capturing non-linear energy patterns from VEP features via signal squaring and logarithmic scaling; and (2) A hierarchical normalization strategy with adaptive attention to balance discriminative identity cues with inter-session invariance by stabilizing feature distributions across multiple levels (feature map, batch, and sample). On two public multi-session SSVEP datasets (Dataset A: 30 subjects, 6 s trials; Dataset B: 54 subjects, 4 s trials), our model outperformed state-of-the-art methods, achieving identification accuracies of 92.92% and 86.30%, and equal error rates of 3.92% and 4.09%, respectively. Further analysis demonstrates that filter bank processing and a reduced set of parietal-occipital electrodes can provide more discriminative features while offering a practical path toward system lightweighting.

Code is available at https://github.com/Ultramua/DRFNet.git.
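The log-power step described in the abstract admits a compact illustration. The sketch below is a hypothetical reading of "signal squaring and logarithmic scaling", not the authors' implementation; the epsilon guard, array shapes, and toy signal are assumptions.

```python
import numpy as np

def log_power(x, eps=1e-8):
    """Hypothetical log-power transform: square the VEP signal to get
    instantaneous energy, then log-scale it to compress the dynamic
    range so relative (inter-individual) differences stand out."""
    return np.log(np.square(x) + eps)

# Toy single-channel VEP-like trace: a sinusoid plus mild noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
y = log_power(x)
print(y.shape)  # (256,)
```

The epsilon term keeps the logarithm finite where the squared signal is near zero.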
Citations: 0
AI-Based QRS Onset Detection in the Early Ventricular Activation Site ECGs.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3605298
Serhii Reznichenko, John Whitaker, Zixuan Ni, Amir AbdelWahab, Usha Tedrow, John L Sapp, Shijie Zhou

Identifying the onset of the QRS complex is an important step for localizing the site of origin (SOO) of premature ventricular complexes (PVCs) and the exit site of ventricular tachycardia (VT). However, identifying the QRS onset is challenging due to signal noise, baseline wander, motion artifact, and muscle artifact. Furthermore, in VT, QRS onset detection is especially difficult due to the overlap with repolarization from the prior beat. In this study, 7,706 captured bipolar pacing beats (Stim-QRS < 40 ms) pooled from 384 anatomically widely dispersed pacing sites in 15 patients were used to train an attention-based Swin-Unet neural network. We also utilized a self-supervised pretraining technique using 88,253 unannotated ECG records. The algorithm correctly identified most onsets in the bipolar pacing-site ECG dataset, achieving a sensitivity of 0.958 and a prediction error of 1.924 ± 4.275 ms. It also achieved prediction errors of 1.518 ± 8.702 ms on the QT Database (QTDB) and 1.333 ± 7.575 ms on the Lobachevsky University Electrocardiography Database (LUDB) public datasets. We also achieved high inter-dataset performance, which supports the practical utility of the method, with a sensitivity of 0.927 on QTDB and 0.981 on LUDB. The AI model achieves accurate onset detection in paced ECGs with spike-removed inputs, providing a controlled, high-fidelity training setting for future efforts to generalize to VT ECGs. The use of self-supervised pretraining further improves the detector's accuracy, showcasing the applicability of the approach and the use of unannotated ECG signals for downstream tasks.

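The sensitivity and prediction-error figures above can be reproduced schematically: a predicted onset counts as a hit when it lands within a tolerance window of the annotation, and error statistics are computed over the hits. This is a generic evaluation sketch, not the paper's code; the 50 ms tolerance and the sample onsets are invented.

```python
import numpy as np

def score_onsets(pred_ms, ref_ms, tol_ms=50.0):
    """Sensitivity = fraction of predictions within +/- tol_ms of the
    reference onset; mean/std error are computed over the hits only."""
    err = np.asarray(pred_ms, float) - np.asarray(ref_ms, float)
    hits = np.abs(err) <= tol_ms
    return hits.mean(), err[hits].mean(), err[hits].std()

# Three toy beats: two accurate onsets, one far-off detection.
sens, mean_err, std_err = score_onsets([102, 250, 331], [100, 248, 400])
print(sens)  # 2 of 3 onsets fall inside the tolerance window
```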
Citations: 0
A Review of Methods for Trustworthy AI in Medical Imaging: The FUTURE-AI Guidelines.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3614546
Haridimos Kondylakis, Richard Osuala, Xenia Puig-Bosch, Noussair Lazrak, Oliver Diaz, Kaisar Kushibar, Ioanna Chouvarda, Stefanie Charalambous, Martijn Pa Starmans, Sara Colantonio, Nikos Tachos, Smriti Joshi, Henry C Woodruff, Zohaib Salahuddin, Gianna Tsakou, Susanna Ausso, Leonor Cerda Alberich, Nickolas Papanikolaou, Philippe Lambin, Kostas Marias, Manolis Tsiknakis, Dimitrios I Fotiadis, Luis Marti-Bonmati, Karim Lekadir

Recent advancements in artificial intelligence (AI) and the vast data generated by modern clinical systems have driven the development of AI solutions in medical imaging, encompassing image reconstruction, segmentation, diagnosis, and treatment planning. Despite these successes and potential, many stakeholders worry about the risks and ethical implications of imaging AI, viewing it as complex, opaque, and challenging to understand, use, and trust in critical clinical applications. The FUTURE-AI guideline for trustworthy AI in healthcare was established based on six guiding principles: Fairness, Universality, Traceability, Usability, Robustness, and Explainability. Through international consensus, a set of recommendations was defined, covering the entire lifecycle of medical AI tools, from design, development, and validation to regulation, deployment, and monitoring. In this paper, we describe how these specific recommendations can be instantiated in the domain of medical imaging, providing an overview of current best practices along with guidelines and concrete metrics on how those recommendations could be met, offering a valuable resource to the international medical imaging community.

Citations: 0
SIBW: A Swarm Intelligence-Based Network Flow Watermarking Approach for Privacy Leakage Detection in Digital Healthcare Systems.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3542561
Sibo Qiao, Qiang Guo, Fengdong Shi, Min Wang, Haohao Zhu, Fazlullah Khan, Joel J P C Rodrigues, Zhihan Lyu

The exponential growth of sensitive patient information and diagnostic records in digital healthcare systems has increased the complexity of data protection, while frequent medical data breaches severely compromise system security and reliability. Existing privacy protection techniques often lack robustness and real-time capabilities in high-noise, high-packet-loss, and dynamic network environments, limiting their effectiveness in detecting healthcare data leaks. To address these challenges, we propose a Swarm Intelligence-Based Network Watermarking (SIBW) method for real-time privacy data leakage detection in digital healthcare systems. SIBW integrates fountain codes with outer error correction codes and employs a Multi-Phase Synergistic Swarm Optimization Algorithm (MPSSOA) to dynamically optimize encoding parameters, significantly enhancing the robustness and interference resistance of watermark detection. Additionally, a reliable synchronization sequence and lightweight embedding mechanism are designed to ensure adaptability to complex, dynamic networks. Experimental results demonstrate that SIBW achieves over 90% detection accuracy under high latency jitter and packet loss conditions, surpassing existing methods in both robustness and efficiency. With a compact design of only 3.7 MB, SIBW is particularly suited for rapid deployment in resource-constrained digital healthcare systems.

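SIBW's underlying primitive, network flow watermarking, is commonly realized by modulating inter-packet delays. The sketch below shows only that generic delay-based scheme; the timing constants, decision threshold, and noise model are assumptions, and none of SIBW's fountain/error-correction coding or MPSSOA parameter optimization is included.

```python
import numpy as np

BASE_MS, EXTRA_MS = 10.0, 8.0  # assumed baseline gap and watermark delay

def embed(bits, jitter_ms=1.0, rng=None):
    """Encode each watermark bit into one inter-packet gap: a '1' adds
    EXTRA_MS of delay; Gaussian jitter models network noise."""
    rng = rng or np.random.default_rng(42)
    return (BASE_MS + EXTRA_MS * np.asarray(bits)
            + rng.normal(0.0, jitter_ms, size=len(bits)))

def detect(gaps):
    """Recover bits by thresholding gaps at the midpoint between the
    unmarked and marked delay levels."""
    return (np.asarray(gaps) > BASE_MS + EXTRA_MS / 2).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = detect(embed(bits))
print(recovered == bits)
```

Robustness under heavier jitter or packet loss is exactly where coding schemes such as fountain codes come in.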
Citations: 0
Value Decomposition-Based Multi-Agent Learning for Anesthetics Collaborative Control.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3599210
Huijie Li, Yide Yu, Si Shi, Anmin Hu, Jian Huo, Wei Lin, Chaoran Wu, Wuman Luo

Automated control of personalized multiple anesthetics in clinical Total Intravenous Anesthesia (TIVA) is crucial yet challenging. Current systems, including target-controlled infusion (TCI) and closed-loop systems, either rely on relatively static pharmacokinetic/pharmacodynamic (PK/PD) models or focus on single-anesthetic control, limiting both personalization and collaborative control. To address these issues, we propose a novel Value Decomposition Multi-Agent Deep Reinforcement Learning (VD-MADRL) framework based on a Markov Game (MG) for Personalized Multiple Anesthetics Control in a Closed-Loop system (PMAC-CL). VD-MADRL optimizes the collaboration between two anesthetics, propofol (Agent I) and remifentanil (Agent II), by leveraging the MG to identify optimal actions among heterogeneous agents. We employ various value function decomposition methods to resolve the credit assignment problem and enhance collaborative control. We also introduce a multivariate environment model based on random forest (RF) for anesthesia state simulation. To ensure data validity, we design a data resampling and alignment technique that synchronizes trajectory data from different devices, avoiding gradient explosion and maintaining conformity to the Markov property. Extensive experiments on general and thoracic surgery datasets demonstrate that VD-MADRL provides more refined dose adjustments and maintains multiple anesthesia state indicators more stably at target levels than human experience. In particular, the best-performing algorithm, VDN in general surgery with online training, achieved a 16.4% increase in cumulative reward (CR) and a 58.0% reduction in mean MDPE compared to human experience, demonstrating its clinical value.

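The value decomposition idea (VDN is the best performer reported above) can be shown in miniature: the joint action value is modeled as the sum of per-agent utilities, so greedy per-agent choices recover the greedy joint action and each agent's credit follows from the sum. The two "agents" stand in for the propofol and remifentanil controllers; the utility tables below are made-up numbers, not learned values.

```python
import numpy as np

q_agent1 = np.array([0.2, 0.9])  # utilities of Agent I's two actions
q_agent2 = np.array([0.5, 0.1])  # utilities of Agent II's two actions

# VDN mixing: Q_tot(a1, a2) = Q1(a1) + Q2(a2), a 2x2 joint table
# built by broadcasting the per-agent vectors against each other.
q_tot = q_agent1[:, None] + q_agent2[None, :]

# The greedy joint action decomposes into per-agent argmaxes, so each
# agent can act independently at execution time.
joint = np.unravel_index(np.argmax(q_tot), q_tot.shape)
print(joint)  # (1, 0): each agent independently picks its best action
```

This additivity is exactly what allows a shared team reward to be decomposed into per-agent learning signals.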
Citations: 0
MRLF-DDI: A Multi-View Representation Learning Framework for Drug-Drug Interaction Event Prediction.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3592643
Jian Zhong, Haochen Zhao, Xiao Liang, Qichang Zhao, Jianxin Wang

Accurately predicting drug-drug interaction events (DDIEs) is critical for improving medication safety and guiding clinical decision-making. However, existing graph neural network (GNN)-based methods often struggle to effectively integrate multi-view features and generalize to novel or understudied drugs. To address these limitations, we propose MRLF-DDI, a multi-view representation learning framework that jointly models information from individual drug features, local interaction contexts, and global interaction patterns. MRLF-DDI introduces atom-level structural features enriched with bond angle information, marking the first incorporation of this geometry-aware feature in DDIE prediction. It further employs a multi-granularity GNN and a gated knowledge transfer strategy to enhance feature learning and cold-start generalization. Extensive experiments on benchmark datasets demonstrate that MRLF-DDI achieves superior performance in both warm-start and cold-start scenarios. Case studies and visualization analyses further highlight its practical utility in identifying clinically relevant DDIEs.

Code for MRLF-DDI is available at https://github.com/jianzhong123/MRLFDDI.
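The bond-angle feature called out above has a standard geometric definition: the angle at atom j formed by bonds j-i and j-k, computed from 3D coordinates. The sketch below is illustrative only, with toy coordinates, and is not the paper's feature pipeline.

```python
import numpy as np

def bond_angle(ci, cj, ck):
    """Angle (degrees) at atom j between bond vectors j->i and j->k."""
    u, v = ci - cj, ck - cj
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Toy geometry: atoms at (1,0,0), origin, and (0,1,0) form a right angle.
theta = bond_angle(np.array([1.0, 0, 0]), np.zeros(3), np.array([0, 1.0, 0]))
print(theta)  # right angle, 90 degrees
```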
Citations: 0
Diagnosis of Major Depressive Disorder Based on Multi-Granularity Brain Networks Fusion.
IF 6.8 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-03-01 | DOI: 10.1109/JBHI.2025.3593617
Mengni Zhou, Rongkun Mi, Ang Zhao, Xin Wen, Yan Niu, Xubin Wu, Yanqing Dong, Yaru Xu, Yanan Li, Jie Xiang

Major Depressive Disorder (MDD) is a common mental disorder, and an early, accurate diagnosis is crucial for effective treatment. Functional connectivity networks (FCNs) constructed from functional magnetic resonance imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the network, failing to fully exploit its deep information. Although graph coarsening techniques offer certain advantages in extracting the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, the Constrained Attention Pooling (CAP) mechanism is employed to achieve effective integration of multi-channel features. In the feature extraction stage, a parameter sharing mechanism is introduced and applied across multiple channels to capture similar connectivity patterns between channels while reducing the number of parameters. We validate the effectiveness of the MGBNF model on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in classification performance, and ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different MDD subtypes across multiple classification tasks, and the results support further clinical applications.

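Attention pooling of multi-channel features, in the spirit of the CAP mechanism named above, can be sketched as a softmax-weighted sum: each channel's feature vector gets a learned score, and the pooled representation is their weighted combination. The scores and dimensions below are invented, and the paper's constraints on the attention weights are not modeled.

```python
import numpy as np

def attention_pool(feats, scores):
    """feats: (channels, dim) feature matrix; scores: (channels,)
    attention logits. Returns (weights, pooled feature vector)."""
    w = np.exp(scores - scores.max())  # shift for numerical stability
    w /= w.sum()                       # softmax attention weights
    return w, (w[:, None] * feats).sum(axis=0)

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 channels, dim 2
scores = np.array([2.0, 0.0, 0.0])  # channel 0 scores highest
w, pooled = attention_pool(feats, scores)
print(w.argmax())  # channel 0 dominates the pooled representation
```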
Citations: 0
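The MGBNF abstract above models brain networks at multiple granularities via graph coarsening. As a hedged illustration only (not the paper's MGBNF implementation), the sketch below collapses a weighted functional-connectivity adjacency matrix to one node per cluster, averaging inter-cluster edge weights; the cluster assignment here is a hypothetical input, whereas MGBNF derives its granularities from the data itself.

```python
# Illustrative graph-coarsening step for a weighted brain network.
# NOT the paper's MGBNF code; the cluster labels are assumed inputs.

def coarsen(adj, clusters):
    """Collapse an n x n adjacency matrix to one node per cluster.

    adj      -- n x n list of lists of edge weights
    clusters -- list mapping node index -> cluster id in 0..k-1
    Returns the k x k coarsened adjacency with averaged block weights.
    """
    k = max(clusters) + 1
    sums = [[0.0] * k for _ in range(k)]
    counts = [[0] * k for _ in range(k)]
    n = len(adj)
    for i in range(n):
        for j in range(n):
            ci, cj = clusters[i], clusters[j]
            sums[ci][cj] += adj[i][j]
            counts[ci][cj] += 1
    return [[sums[a][b] / counts[a][b] if counts[a][b] else 0.0
             for b in range(k)] for a in range(k)]

# Four hypothetical brain regions merged into two granules:
adj = [[0, 1, 0, 0],
       [1, 0, 2, 0],
       [0, 2, 0, 3],
       [0, 0, 3, 0]]
coarse = coarsen(adj, [0, 0, 1, 1])
```

Repeating this with coarser and coarser cluster assignments yields the kind of multi-granularity view of one network that the abstract describes fusing.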
Optimized XGBoost for Multimodal Affective State Classification Using In-Ear PPG and Behind-the-Ear EEG Signals.
IF 6.8 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-03-01 DOI: 10.1109/JBHI.2025.3598354
Hika Barki, Ngoc-Dau Mai, Wan-Young Chung

Automated emotion identification via physiological data from wearable devices is a growing field, yet traditional electroencephalography (EEG) and photoplethysmography (PPG) collection methods can be uncomfortable. This research introduces a novel structure of the in-ear wearable device that captures both PPG and EEG signals to enhance user comfort for emotion recognition. Data were collected from 21 individuals experiencing four emotional states (fear, happy, calm, sad) induced by video stimuli. Following signal preprocessing, temporal and frequency domain features were extracted and selected using the ReliefF approach. Classification accuracy was assessed for PPG, EEG, and combined features, with combined features yielding superior results. An XGBoost classifier, optimized with Bayesian hyperparameter tuning, achieved 97.58% accuracy, 97.57% precision, 97.57% recall, and a 97.58% F1 score, outperforming support vector machine, decision tree, random forest, and K-Nearest Neighbor classifiers. These findings highlight the benefits of multimodal physiological sensing and optimized machine learning for reliable emotion characterization, with implications for mental health monitoring and human-computer interaction.

{"title":"Optimized XGBoost for Multimodal Affective State Classification Using In-Ear PPG and Behind-the-Ear EEG Signals.","authors":"Hika Barki, Ngoc-Dau Mai, Wan-Young Chung","doi":"10.1109/JBHI.2025.3598354","DOIUrl":"10.1109/JBHI.2025.3598354","url":null,"abstract":"<p><p>Automated emotion identification via physiological data from wearable devices is a growing field, yet traditional electroencephalography (EEG) and photoplethysmography (PPG) collection methods can be uncomfortable. This research introduces a novel structure of the in-ear wearable device that captures both PPG and EEG signals to enhance user comfort for emotion recognition. Data were collected from 21 individuals experiencing four emotional states (fear, happy, calm, sad) induced by video stimuli. Following signal preprocessing, temporal and frequency domain features were extracted and selected using the ReliefF approach. Classification accuracy was assessed for PPG, EEG, and combined features, with combined features yielding superior results. An XGBoost classifier, optimized with Bayesian hyperparameter tuning, achieved 97.58% accuracy, 97.57% precision, 97.57% recall, and a 97.58% F1 score, outperforming support vector machine, decision tree, random forest, and K-Nearest Neighbor classifiers. 
These findings highlight the benefits of multimodal physiological sensing and optimized machine learning for reliable emotion characterization, with implications for mental health monitoring and human-computer interaction.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":"2139-2152"},"PeriodicalIF":6.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144845820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
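The abstract above selects features with ReliefF before classification. As a rough sketch of the idea behind that ranking (a simplified single-neighbor, binary-label Relief; the study's actual ReliefF uses multiple neighbors and handles multi-class labels), features that separate a sample from its nearest other-class neighbor more than from its nearest same-class neighbor receive higher weights:

```python
# Simplified Relief-style feature weighting (illustrative only; not the
# study's ReliefF). Assumes two classes and roughly unit-scaled features.
import math

def relief_weights(X, y):
    """Return one relevance weight per feature column of X."""
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        if not hits or not misses:
            continue
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for f in range(d):
            # reward separation across classes, penalize spread within class
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return [wf / n for wf in w]

# Feature 0 separates the two classes; feature 1 is noise:
X = [[0.0, 0.3], [0.1, 0.7], [0.9, 0.4], [1.0, 0.6]]
y = [0, 0, 1, 1]
w = relief_weights(X, y)
```

Keeping only the top-weighted features before fitting the tuned XGBoost classifier is the selection step the abstract refers to.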
Non-Direct Contact ECG Signal Classification Using a Hybrid Deep Learning Framework With Validation in Bedside Heart Rate Variability Analysis.
IF 6.8 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-03-01 DOI: 10.1109/JBHI.2025.3601807
Zhijun Xiao, Maarten De Vos, Christos Chatzichristos, Yunyi Jiang, Minghui Zhao, Fei Ding, Chenxi Yang, Jianqing Li, Chengyu Liu

In recent years, the demand for smart healthcare solutions has heightened the need for accuracy, reliability, and comfort in bedside ECG recording and analysis. This study presents a bedside non-direct contact ECG recording system based on capacitive coupling electrocardiography (cECG) and verifies its performance in accurately capturing Heart Rate Variability (HRV) during the night. Firstly, cECG collects ECG data through clothing, avoiding the skin irritation caused by conventional wet electrodes. Secondly, leveraging the unique characteristics of cECG signals, a deep learning framework assesses cECG quality, filtering noise and identifying off-bed periods to enhance HRV analysis precision. Subsequently, the system was employed to record overnight sleep data from 6 subjects, with our proposed algorithm used for signal quality assessment (SQA) and HRV analysis. Finally, HRV features were compared with synchronously collected wet-electrode ECG signals, encompassing time-domain, frequency-domain, and nonlinear features, 13 HRV features in total. Experimental findings demonstrate that for the SQA task, the model achieved a classification accuracy of 94.7%, with a Recall of 0.941, Precision of 0.940, F1 score of 0.941, and Cohen's Kappa of 0.927. The accuracy of on/off-bed monitoring reached 99.79%. Additionally, HRV features showed a strong correlation with the reference ECG. Among the time-domain metrics, the largest mean absolute percentage error (MAPE) is for PNN50, at 8.148%. Among the frequency-domain features, the largest MAPE is for HF, at 13.253%. For nonlinear features, the largest MAPE is for SD1, at 5.182%. Overall, the system provides a reliable solution for cECG recording, on/off-bed status detection, and bedside HRV analysis.

{"title":"Non-Direct Contact ECG Signal Classification Using a Hybrid Deep Learning Framework With Validation in Bedside Heart Rate Variability Analysis.","authors":"Zhijun Xiao, Maarten De Vos, Christos Chatzichristos, Yunyi Jiang, Minghui Zhao, Fei Ding, Chenxi Yang, Jianqing Li, Chengyu Liu","doi":"10.1109/JBHI.2025.3601807","DOIUrl":"10.1109/JBHI.2025.3601807","url":null,"abstract":"<p><p>In recent years, the demand for smart healthcare solutions have heightened the need for accuracy, reliability, and comfort in bedside ECG recording and analysis. This study presents a bedside non-direct contact ECG recording system based on capacitive coupling electrocardiography (cECG) and verifies its performance in accurately capturing Heart Rate Variability (HRV) during the night. Firstly, cECG collects ECG data through clothing, avoiding skin irritation from conventional wet electrodes. Secondly, leveraging the unique characteristics of cECG signals, a deep learning framework assesses the quality of cECG, filtering noise and identifying off-bed information, enhancing HRV analysis precision. Subsequently, the system was employed to recording sleep data from 6 subjects overnight, with our proposed algorithm utilized for signal quality assessment (SQA) and HRV analysis. Finally, HRV features were compared with synchronously collected wet electrode ECG signals, encompassing time domain features, frequency domain features, and nonlinear features, totaling 13 HRV features. Experimental findings demonstrate that for the SQA task, the model achieved a classification accuracy of 94.7%, with a Recall of 0.941, Precision of 0.940, F1 score of 0.941, and Cohen's Kappa of 0.927. The accuracy of on/off-bed monitoring reached 99.79%. Additionally, HRV features showed a strong correlation with the reference ECG. In the time-domain metrics, the largest mean absolute percentage error (MAPE) is for PNN50, with a value of 8.148%. 
In the frequency-domain features, the largest MAPE is for HF, with a value of 13.253%. For nonlinear features, the largest MAPE is for SD1, with a value of 5.182%. Generally, the system exhibited a reliable solution for cECG recording, on/off-bed status detection, and bedside HRV analysis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":"1959-1971"},"PeriodicalIF":6.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144952225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
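The abstract above validates cECG-derived HRV against a wet-electrode reference using MAPE on features such as PNN50. A hedged sketch of the standard time-domain HRV definitions and the MAPE comparison (RR intervals in milliseconds; the series below are made-up illustrative values, not the study's data; SDNN here uses the population standard deviation):

```python
# Time-domain HRV features (SDNN, RMSSD, PNN50) and the MAPE used to
# compare an estimated feature against a reference. Illustrative only.
import math

def sdnn(rr):
    """Standard deviation of RR intervals (population form)."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR differences."""
    d = [rr[i + 1] - rr[i] for i in range(len(rr) - 1)]
    return math.sqrt(sum(x * x for x in d) / len(d))

def pnn50(rr):
    """Percentage of successive RR differences exceeding 50 ms."""
    d = [abs(rr[i + 1] - rr[i]) for i in range(len(rr) - 1)]
    return 100.0 * sum(1 for x in d if x > 50) / len(d)

def mape(est, ref):
    """Mean absolute percentage error between paired feature values."""
    return 100.0 * sum(abs(e - r) / abs(r) for e, r in zip(est, ref)) / len(ref)

rr_cecg = [812, 790, 845, 802, 880, 798]  # hypothetical cECG RR series (ms)
rr_ref  = [810, 792, 840, 805, 876, 800]  # hypothetical reference series (ms)
err = mape([pnn50(rr_cecg)], [pnn50(rr_ref)])
```

The study's reported 8.148% PNN50 MAPE is exactly this kind of per-feature comparison, averaged over its recordings.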
4PM: Privacy-Preserving Patient-Provider Matching Service in Digital Healthcare System.
IF 6.8 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-03-01 DOI: 10.1109/JBHI.2025.3644174
Jing Lei, Haobo Zhang, Fake Lyu, Jinghui Qin, Qingqi Pei

For digital health platforms, the challenge is balancing patient privacy with the ability to match patients to the right providers quickly and accurately. Existing systems often suffer from privacy leakage, insufficient matching precision, and degraded performance when dealing with large-scale data. In this paper, we propose 4PM, a novel privacy-preserving patient-provider matching scheme that leverages secure computation to deliver strong privacy guarantees while ensuring efficient and accurate matching. Our method partitions patient data between two non-colluding servers via secret sharing, employing the optimized Millionaires' Protocol for secure ranking and leveraging oblivious retrieval techniques for privacy-preserving matching. 4PM significantly reduces the computational complexity of high-dimensional data, achieving end-to-end latency within 0.5 seconds in scenarios with 200 doctors and 200-dimensional symptom vectors. Our work contributes to fostering secure and trustworthy healthcare in the digital era.

{"title":"4PM: Privacy-Preserving Patient-Provider Matching Service in Digital Healthcare System.","authors":"Jing Lei, Haobo Zhang, Fake Lyu, Jinghui Qin, Qingqi Pei","doi":"10.1109/JBHI.2025.3644174","DOIUrl":"10.1109/JBHI.2025.3644174","url":null,"abstract":"<p><p>For digital health platforms, the challenge is balancing patient privacy with the ability to match patients to the right providers quickly and accurately. Existing systems often suffer from privacy leakage, insufficient matching precision, and degraded performance when dealing with large-scale data. In this paper, we propose 4PM, a novel privacy-preserving patient-provider matching scheme that leverages secure computation to deliver strong privacy guarantees while ensuring efficient and accurate matching. Our method partitions patient data between two non-colluding servers via secret sharing, employing the optimized Millionaires' Protocol for secure ranking and leveraging oblivious retrieval techniques for privacy-preserving matching. 4PM significantly reduces the computational complexity of high-dimensional data, achieving end-to-end latency within 0.5 seconds in scenarios with 200 doctors and 200-dimensional symptom vectors. Our work contributes to fostering secure and trustworthy healthcare in the digital era.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":"1947-1958"},"PeriodicalIF":6.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145762784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
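The 4PM abstract above partitions patient data between two non-colluding servers via secret sharing. As an illustration of that primitive alone (the scheme's actual protocol, with its optimized Millionaires' comparison and oblivious retrieval, is far more involved), a minimal additive secret-sharing sketch, with the modulus chosen here for the example:

```python
# Two-server additive secret sharing over a prime field. Illustrative
# primitive only; not the 4PM protocol itself.
import secrets

P = 2**61 - 1  # Mersenne prime used as the sharing modulus (our choice)

def share(x):
    """Split x into two shares; neither share alone reveals x."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Additive homomorphism: each server adds its shares locally, and the
# summed shares still reconstruct to x + y without revealing either value.
x, y = 42, 100
x0, x1 = share(x)
y0, y1 = share(y)
combined = reconstruct(x0 + y0, x1 + y1)
```

This local-addition property is what lets the two servers compute on symptom vectors jointly while each sees only uniformly random shares.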
Journal
IEEE Journal of Biomedical and Health Informatics