Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3597054
Muhammad Salman Haleem, Vasilis Aidonis, Eleni I Georga, Maria Krini, Maria Matsangidou, Angelos P Kassianos, Constantinos S Pattichis, Miguel Rujas, Laura Lopez-Perez, Giuseppe Fico, Leandro Pecchia, Dimitrios I Fotiadis, Gatekeeper Consortium
Monitoring of advanced cancer patients' health, treatment, and supportive care is essential for improving cancer survival outcomes. Traditionally, oncology has relied on clinical metrics such as survival rates, time to disease progression, and clinician-assessed toxicities. In recent years, patient-reported outcome measures (PROMs) have provided a complementary perspective, offering insights into patients' health-related quality of life (HRQoL). However, collecting PROMs consistently requires frequent clinical assessments, posing significant logistical challenges. Wearable devices combined with artificial intelligence (AI) present an innovative solution for continuous, real-time HRQoL monitoring. While deep learning models effectively capture temporal patterns in physiological data, most existing approaches are unimodal, limiting their ability to address patient heterogeneity and complexity. This study introduces a multimodal deep learning approach to estimate HRQoL in advanced cancer patients. Physiological data, such as heart rate and sleep quality collected via wearable devices, are analyzed using a hybrid model combining convolutional neural networks (CNNs) and bidirectional long short-term memory (BiLSTM) networks with an attention mechanism. The BiLSTM extracts temporal dynamics, the attention mechanism highlights key features, and the CNNs detect localized patterns. PROMs, including the Hospital Anxiety and Depression Scale (HADS) and the Integrated Palliative Care Outcome Scale (IPOS), are processed through a parallel neural network before being integrated into the physiological data pipeline. The proposed model was validated with data from 204 patients over 42 days, achieving a mean absolute percentage error (MAPE) of 0.24 in HRQoL prediction. These results demonstrate the potential of combining wearable data and PROMs to improve advanced cancer care.
Title: A Multimodal Deep Learning Architecture for Estimating Quality of Life for Advanced Cancer Patients Based on Wearable Devices and Patient-Reported Outcome Measures
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1166-1177
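The reported MAPE of 0.24 is a standard error metric; as a minimal illustration of how it is computed (the HRQoL values below are invented for the example, not data from the study):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error: mean(|y_true - y_pred| / |y_true|)."""
    if len(y_true) != len(y_pred):
        raise ValueError("length mismatch")
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical HRQoL scores on a 0-100 scale -- not values from the paper.
true_scores = [60.0, 75.0, 50.0]
pred_scores = [66.0, 60.0, 55.0]
print(round(mape(true_scores, pred_scores), 3))  # 0.133
```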
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2024.3505955
Yizhen Luo, Jiahuan Zhang, Siqi Fan, Kai Yang, Massimo Hong, Yushuai Wu, Mu Qiao, Zaiqing Nie
Recent advances in large language models (LLMs) like ChatGPT have shed light on the development of knowledgeable and versatile AI research assistants in various scientific domains. However, they fall short in biomedical applications due to a lack of proprietary biomedical knowledge and deficiencies in handling biological sequences for molecules and proteins. To address these issues, we present BioMedGPT, a multimodal large language model for assisting biomedical research. We first incorporate domain expertise into LLMs by incremental pre-training on large-scale biomedical literature. Then, we harmonize 2D molecular graphs, protein sequences, and natural language within a unified, parameter-efficient fusion architecture by fine-tuning on multimodal question-answering datasets. Through comprehensive experiments, we show that BioMedGPT performs on par with human experts in comprehending biomedical documents and answering research questions. It also exhibits promising capability in analyzing intricate functions and properties of novel molecules and proteins, surpassing state-of-the-art LLMs by 17.1% and 49.8% absolute gains respectively in ROUGE-L on molecule and protein question-answering.
Title: BioMedGPT: An Open Multimodal Large Language Model for BioMedicine
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 981-992
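ROUGE-L, the metric behind the reported molecule and protein question-answering gains, scores the longest common subsequence between candidate and reference text. A minimal sketch of the standard definition follows; the whitespace tokenization and `beta` weighting here are common defaults, not necessarily the paper's exact configuration:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference, candidate, beta=1.2):
    """ROUGE-L F-measure between whitespace-tokenized strings."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```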
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3606992
Rafic Nader, Vincent L'Allinec, Romain Bourcier, Florent Autrusseau
Intracranial aneurysms (ICA) commonly occur in specific segments of the Circle of Willis (CoW), primarily at thirteen major arterial bifurcations. Accurate detection of these critical landmarks is necessary for a prompt and efficient diagnosis. We introduce a fully automated landmark detection approach for CoW bifurcations using a two-step neural network process. Initially, an object detection network identifies regions of interest (ROIs) proximal to the landmark locations. Subsequently, a modified U-Net with deep supervision is exploited to accurately locate the bifurcations. This two-step method mitigates several problems, such as missed detections caused by two landmarks lying close to each other with similar visual characteristics, especially when processing the complete Time-of-Flight MRA (TOF-MRA) volume. Additionally, it accounts for the anatomical variability of the CoW, which affects the number of detectable landmarks per scan. We assessed the effectiveness of our approach on two cerebral MRA datasets: our in-house dataset, with varying numbers of landmarks per scan, and a public dataset with a standardized landmark configuration. Our experimental results demonstrate that our method achieves the highest level of performance on the bifurcation detection task.
Title: Two-Steps Neural Networks for an Automated Cerebrovascular Landmark Detection Along the Circle of Willis
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1353-1364
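Landmark localizers of this kind typically emit one heatmap per landmark in the second stage. A generic way to decode such heatmaps into coordinates, with a threshold that lets a landmark be reported as absent under anatomical variability, is sketched below; this is an illustrative decoding step, not the paper's exact post-processing:

```python
import numpy as np

def heatmap_peaks(heatmaps, threshold=0.5):
    """Decode per-landmark heatmaps of shape (K, H, W) into (x, y)
    coordinates via per-channel argmax; a channel whose peak falls below
    `threshold` is reported as None (landmark absent in this anatomy)."""
    coords = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((int(x), int(y)) if hm[y, x] >= threshold else None)
    return coords
```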
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3566167
Daniel Foronda-Pascual, Carmen Camara, Pedro Peris-Lopez
Biometric data are extensively used in modern healthcare systems and are often transmitted over networks for various purposes, raising inherent privacy and security concerns. Wearable devices, smartphones, and Internet of Things (IoT) technologies are common sources of such data, which are susceptible to interception during transmission. To mitigate these risks, cancelable biometrics offer a promising solution by enabling secure and privacy-preserving identification. In this study, we propose a cancelable identification model based on contactless heart signals acquired via continuous-wave radar. The recorded signal, which reflects cardiac motion, is first transformed into a scalogram. Feature extraction is then performed using Convolutional Neural Networks (CNNs), comparing models trained via transfer learning with those trained solely on the dataset. Before classification, the extracted features are converted into cancelable templates using Gaussian Random Projection (GRP), and classification is performed using a Multilayer Perceptron (MLP). The proposed method demonstrates feasibility, achieving 91.20% accuracy across all scenarios in the dataset, rising to 95.40% in the resting scenario alone. Additionally, CNNs trained exclusively on the dataset outperform pre-trained transfer-learning models in feature extraction performance.
Title: Untouchable and Cancelable Biometrics: Human Identification in Various Physiological States Using Radar-Based Heart Signals
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 921-934
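Gaussian Random Projection, the cancelable-template step described above, projects a feature vector through a random Gaussian matrix seeded by a revocable user key: if a template leaks, issuing a new key produces a fresh, unlinkable template from the same biometric features. A minimal stdlib sketch; the dimensions and keys are illustrative, not the paper's settings:

```python
import math
import random

def cancelable_template(features, user_key, out_dim=16):
    """Project `features` through a Gaussian random matrix derived
    deterministically from `user_key` (Gaussian Random Projection).
    Changing the key revokes the old template and issues a new one."""
    rng = random.Random(user_key)  # key-seeded, hence reproducible and revocable
    return [
        sum(rng.gauss(0.0, 1.0) * f for f in features) / math.sqrt(out_dim)
        for _ in range(out_dim)
    ]

feats = [0.5, -1.2, 3.3, 0.0, 2.1]                  # stand-in for CNN features
t_old = cancelable_template(feats, user_key=1234)
t_new = cancelable_template(feats, user_key=9999)   # revoked and re-issued
```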
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3584068
Zhanshi Zhu, Qing Dong, Gongning Luo, Wei Wang, Suyu Dong, Kuanquan Wang, Ye Tian, Guohua Wang, Shuo Li
In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks.
Title: Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1429-1442
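The abstract does not detail FTGen's internals; a common technique for replacing domain-specific appearance while preserving anatomy is to swap the low-frequency amplitude spectrum between images, as in Fourier-based domain adaptation. The sketch below illustrates that general idea only and should not be read as the paper's actual generator:

```python
import numpy as np

def swap_low_freq_amplitude(src, ref, ratio=0.1):
    """Replace the low-frequency amplitude spectrum of `src` with that of
    `ref` while keeping src's phase (i.e., its structure). Illustrates
    Fourier-based domain/style replacement, not the paper's FTGen."""
    fs, fr = np.fft.fft2(src), np.fft.fft2(ref)
    amp, pha = np.abs(fs), np.angle(fs)
    amp_ref = np.abs(fr)
    h, w = src.shape
    bh, bw = max(1, int(h * ratio)), max(1, int(w * ratio))
    mask = np.zeros((h, w), dtype=bool)  # low frequencies sit at the corners
    mask[:bh, :bw] = True
    mask[:bh, -bw:] = True
    mask[-bh:, :bw] = True
    mask[-bh:, -bw:] = True
    amp[mask] = amp_ref[mask]
    return np.fft.ifft2(amp * np.exp(1j * pha)).real
```

Swapping an image's spectrum with itself is a useful sanity check: the function should then return the input unchanged up to floating-point error.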
Magnetocardiography (MCG) enables passive, highly sensitive detection of the weak magnetic fields generated by the heart, which can offer valuable information for diagnosing and treating heart conditions. Owing to the geomagnetic field and unknown magnetic interference, MCG signals are often overwhelmed by high levels of magnetic noise. In this paper, we propose the design of a high-resolution and movable MCG system comprising an active-passive coupling magnetic control (AP-CMC) system and a wearable multi-channel signal detection array. The system performs MCG measurement while the AP-CMC system eliminates interference in real time, i.e., simultaneous control and simultaneous measurement. Dynamic MCG signal measurements were successfully conducted, capturing the typical characteristic features of MCG signals. Our method shows promise in enhancing the accuracy and expanding the scope of MCG measurement applications, thereby offering valuable insights for the early diagnosis and precise localization of heart diseases.
Title: High-Resolution and Wearable Magnetocardiography (MCG) Measurement With Active-Passive Coupling Magnetic Control Method
Authors: Shuai Dou, Xikai Liu, Pengfei Song, Yidi Cao, Tong Wen, Rui Feng, Bangcheng Han
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3584984
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1178-1186
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3587639
Pengfei Wang, Danyang Li, Yaoduo Zhang, Gaofeng Chen, Yongbo Wang, Jianhua Ma, Ji He
Although supervised deep learning methods have made significant advances in low-dose computed tomography (LDCT) image denoising, these approaches typically require pairs of low-dose and normal-dose CT images for training, which are often unavailable in clinical settings. Self-supervised deep learning (SSDL) has great potential to cast off the dependence on paired training datasets. However, existing SSDL methods rely on the assumption that neighboring noise values are independent, making them ineffective against the spatially correlated noise in LDCT images. To address this issue, this paper introduces a novel SSDL approach, named Noise-Aware Blind Spot Network (NA-BSN), for high-quality LDCT imaging that mitigates the dependence on the neighboring-noise-independence assumption. NA-BSN achieves high-quality image reconstruction without reference clean data through an explicit noise-aware constraint imposed during self-supervised learning. Specifically, it is experimentally observed and theoretically proven that the $l_{1}$ norm of CT images in a downsampled space decreases as the radiation dose increases; this relationship is then used to construct the explicit noise-aware constraint in the BSN architecture for self-supervised LDCT image denoising. Various clinical datasets are adopted to validate the performance of the presented NA-BSN method. Experimental results reveal that NA-BSN significantly reduces spatially correlated CT noise and retains crucial image details across a variety of complex scenarios, such as different scanner types, scanning positions, dose-level settings, and reconstruction kernels.
Title: BSN With Explicit Noise-Aware Constraint for Self-Supervised Low-Dose CT Denoising
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1286-1299
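NA-BSN builds on the blind-spot idea, in which each pixel is predicted from its neighbours but never from itself. A generic Noise2Void-style blind-spot masking step is sketched below for illustration; the paper's NA-BSN additionally imposes the noise-aware l1 constraint, which is not reproduced here:

```python
import numpy as np

def blind_spot_batch(img, n_mask=16, rng=None):
    """Noise2Void-style blind-spot masking: selected pixels are replaced
    by a randomly chosen neighbour, and a denoiser is then trained to
    predict the original values at those positions from surrounding
    context only. Generic input preparation, not NA-BSN itself."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    masked = img.copy()
    ys = rng.integers(1, h - 1, n_mask)   # masked-pixel rows (away from borders)
    xs = rng.integers(1, w - 1, n_mask)   # masked-pixel columns
    dy = rng.integers(-1, 2, n_mask)      # neighbour offsets in {-1, 0, 1}
    dx = rng.integers(-1, 2, n_mask)
    masked[ys, xs] = img[ys + dy, xs + dx]
    return masked, (ys, xs), img[ys, xs]  # network input, positions, targets
```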
Pub Date: 2026-02-01 | DOI: 10.1109/JBHI.2025.3636169
Yasaman Baradaran, Raul Fernandez Rojas, Roland Goecke, Maryam Ghahramani
The prefrontal cortex (PFC) of the brain is involved in processing visual, vestibular, and somatosensory inputs to stabilise postural balance. However, the PFC's activation map for a standing person and different sensory inputs remains unclear. This study aimed to explore the PFC activity map and distinct haemodynamic responses during postural control when sensory inputs change. To this end, functional near-infrared spectroscopy (fNIRS) was employed to capture the haemodynamic responses throughout the PFC from a group of young adults standing in four sensory conditions. The results revealed distinct PFC activation patterns supporting sensory processing, motor planning, and cognitive control to maintain balance under different degraded sensory conditions. Furthermore, by applying machine learning classifiers and multivariate feature selection, the PFC locations and haemodynamic responses indicative of different sensory conditions were identified. The findings of this study offer valuable insights for optimising rehabilitation approaches, enhancing the design of fNIRS studies, and advancing brain-computer interface technologies for balance assessment and training.
Title: Exploring Prefrontal Cortex Involvement in Postural Control Across Degraded Sensory Conditions Using fNIRS and Classification
Journal: IEEE Journal of Biomedical and Health Informatics, pp. 1418-1428
In modern medicine, the widespread use of medical imaging has greatly improved diagnostic and treatment efficiency. However, these images contain sensitive personal information, and any leakage could seriously compromise patient privacy, leading to ethical and legal issues. Federated learning (FL), an emerging privacy-preserving technique, transmits gradients rather than raw data for model training. Yet, recent studies reveal that gradient inversion attacks can exploit this information to reconstruct private data, posing a potential threat to FL. However, current attacks remain limited in image resolution, reconstruction similarity, and batch processing, and thus do not yet pose a significant practical risk to FL. To address this, we propose a novel gradient inversion attack based on sparsified gradient matching and segmentation reorganization (SR) to reconstruct high-resolution, high-similarity medical images in batch mode. Specifically, an $L_{1}$ loss function optimises the gradient sparsification process, while the SR strategy enhances image resolution. An adaptive learning rate adjustment mechanism is also employed to improve optimisation stability and avoid local optima. Experimental results demonstrate that our method significantly outperforms state-of-the-art approaches in both visual quality and quantitative metrics, achieving up to a 146% improvement in similarity.
"Medical Image Privacy in Federated Learning: Segmentation-Reorganization and Sparsified Gradient Matching Attacks."
Kaimin Wei, Jin Qian, Chengkun Jia, Jinpeng Chen, Jilian Zhang, Yongdong Wu, Jinyu Zhu, Yuhan Guo
DOI: 10.1109/JBHI.2025.3593631. IEEE Journal of Biomedical and Health Informatics, pp. 1443-1451. Pub Date : 2026-02-01
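To see why shared gradients can leak private inputs at all, consider a deliberately minimal sketch (not the paper's SR method): for a single linear unit trained with squared loss, the weight gradient is the bias gradient scaled by the input, so an observer of the gradients can recover the private record exactly.

```python
import numpy as np

# Private client record (e.g., five features of a medical image patch)
x_true = np.array([0.5, -1.2, 3.0, 0.7, -0.3])
t = 1.0                                    # private label
w = np.array([0.2, 0.4, -0.1, 0.8, 0.3])  # current global weights
b = 0.1                                    # current global bias

# Client-side: gradients of the squared loss L = (w @ x + b - t)**2
r = 2.0 * (w @ x_true + b - t)             # dL/db (scaled residual)
g_w = r * x_true                           # dL/dw = (dL/db) * x

# Attacker-side: only the shared gradients g_w and r are observed,
# yet the private input is recovered exactly by elementwise division.
x_rec = g_w / r
print(np.allclose(x_rec, x_true))          # True
```

Realistic attacks such as the one in this paper face deep nonlinear networks and batched gradients, which is why they must optimise a gradient-matching objective (here, with $L_{1}$ sparsification) rather than invert a closed form.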
Pub Date : 2026-02-01 DOI: 10.1109/JBHI.2025.3595371
Patricia A Apellaniz, Borja Arroyo Galende, Ana Jimenez, Juan Parras, Santiago Zazo
The scarcity of medical data, particularly in Survival Analysis (SA) for cancer-related diseases, challenges data-driven healthcare research. While Synthetic Tabular Data Generation (STDG) models have been proposed to address this issue, most rely on datasets with abundant samples, which do not reflect real-world limitations. We propose an STDG approach that leverages transfer learning and meta-learning to create an artificial inductive bias, guiding generative models trained on limited samples. Experiments on classification datasets across varying sample sizes validated the method's robustness, with a further clinical utility assessment on cancer-related SA data. While divergence-based similarity validation proved effective in capturing improvements in generation quality, clinical utility validation showed limited sensitivity to sample size, highlighting its shortcomings. In SA experiments, we observed that altering the task can reveal whether relationships among variables are accurately generated, with most cases benefiting from the proposed methodology. Our findings confirm the method's ability to generate high-quality synthetic data under constrained conditions. We emphasize the need to complement utility-based validation with similarity metrics, particularly in low-data settings, to assess STDG performance reliably.
"Advancing Cancer Research With Synthetic Data Generation in Low-Data Scenarios."
IEEE Journal of Biomedical and Health Informatics, pp. 1666-1679.
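Divergence-based similarity validation of the kind recommended above can be illustrated with a small self-contained check. This sketch uses a generic Jensen-Shannon divergence between per-column histograms — an illustrative assumption, not the paper's exact metric: a well-matched generator should score a lower divergence against the real column than a poorly-matched one.

```python
import numpy as np

def js_divergence(real, synth, bins=20):
    """Jensen-Shannon divergence (in bits) between histograms of two 1-D samples."""
    lo = min(real.min(), synth.min())
    hi = max(real.max(), synth.max())
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(synth, bins=bins, range=(lo, hi))
    # Smooth with a tiny constant so empty bins do not produce log(0),
    # then normalise the counts into probability vectors.
    p = (p + 1e-9) / (p + 1e-9).sum()
    q = (q + 1e-9) / (q + 1e-9).sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 2000)   # "real" feature column
good = rng.normal(0.0, 1.0, 2000)   # well-matched synthetic generator
bad = rng.normal(2.0, 1.0, 2000)    # mis-specified synthetic generator
print(js_divergence(real, good) < js_divergence(real, bad))  # True
```

Because the divergence is computed directly on marginal distributions, it stays informative even when downstream clinical-utility tasks (the abstract's concern) are too insensitive to distinguish sample sizes.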