Objective: Fluorescence cell counting is vital in biomedical research, yet existing automated methods lack sufficient adaptability and accuracy, leading to persistent errors in complex microscopy images. This study proposes an adaptive, interactive approach to overcome these limitations.
Methods: We introduce the Adaptive Interactive Cell Counting (AICC) framework, combining a coordinate-based prediction module with user-guided correction. Specifically, we develop two novel global correction algorithms, Proposal Expansion (PE) and Prediction Filtering (PF), coupled with a new RGB-Aware Structural Similarity (RGB-Aware SSIM) metric to identify visually similar regions and efficiently propagate minimal user corrections. Additionally, we release NEFCell, a new high-resolution fluorescence microscopy dataset designed explicitly for evaluating interactive cell counting methods.
Results: Extensive evaluations show that AICC significantly surpasses current state-of-the-art methods, reducing counting errors by up to 36.8% compared to non-interactive approaches and up to 65.3% compared to existing interactive methods, while improving localization accuracy by 7.3% on average and significantly minimizing interaction time.
Conclusion: The proposed AICC framework substantially enhances accuracy and reduces effort required for fluorescence cell counting, proving its effectiveness in integrating automation with user expertise.
Significance: AICC represents a valuable tool for biomedical researchers and clinicians, facilitating precise and efficient cell analyses in complex experimental and clinical contexts.
Title: "Interactive Fluorescence Cell Counting via User-Guided Correction." Authors: Haodi Zhong, Rongjing Zhou, Di Wang, Zili Wu, Pingping Li, Rui Jia. IEEE Transactions on Biomedical Engineering. Pub Date: 2026-02-06. DOI: 10.1109/TBME.2026.3661595
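The abstract names an RGB-Aware SSIM metric for finding visually similar regions but does not define it. As a minimal sketch, assuming the metric simply averages a global (non-windowed) SSIM over the R, G, and B channels — the function names, the `data_range` default, and the averaging scheme are all our assumptions, not the paper's definition:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global (non-windowed) SSIM between two single-channel image patches."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def rgb_aware_ssim(a, b):
    """Hypothetical RGB-aware variant: average the per-channel SSIM over R, G, B."""
    return float(np.mean([ssim_global(a[..., c], b[..., c]) for c in range(3)]))
```

Regions whose pairwise score exceeds a threshold could then receive the same propagated user correction.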
Pub Date: 2026-02-04. DOI: 10.1109/TBME.2026.3661416
Sina Parsnejad, Jan W Brascamp, Galit Pelled, Andrew J Mason
Tactile stimulation, especially electrotactile stimulation, has been a subject of interest in recent literature for machine-to-human communication (M2HC) of electronically gathered information, with the aim of augmenting and improving the human experience. Electrotactile stimulation is a direct, noninvasive method for peripheral nerve stimulation that provides a pathway for communication with the brain. However, its widespread use as an M2HC pathway is hampered by the availability and ease of use of mainstream visual and audio communication methods, and by technological challenges that must be resolved, such as skin condition dependency, neural adaptation, and the lack of a framework for producing consistent electrotactile M2HC. As such, this paper (1) reviews the scientific and engineering literature associated with electrotactile stimulation and the associated electronics, with the goal of converging disciplinary knowledge on this topic; (2) summarizes recent advances and open challenges in electrotactile stimulation; and (3) discusses available techniques and introduces a unifying model for icon-based electrotactile communication. In contrast to prior review papers on the subject, this paper uniquely focuses on defining electrotactile stimulation as a method for robust machine-to-human communication while compiling and discussing relevant engineering, physiology, and neuroscience issues, thus providing a comprehensive understanding of electrotactile M2HC for the IEEE community.
Title: "A review of electrotactile stimulation for machine-to-human communication."
Pub Date: 2026-02-04. DOI: 10.1109/TBME.2026.3661297
Marius Briel, Ludwig Haide, Mathias Reincke, Rebekka Peter, Nicola Piccinelli, Gernot Kronreif, Franziska Mathis-Ullrich, Eleonora Tagliabue
Objective: Micrometer-scale precision is vital for patient safety in ophthalmic surgery. Recent advancements in instrument-integrated optical sensors aim to accurately measure instrument-to-tissue distances. However, the reliability of these measurements is often hindered by segmentation errors caused by artifacts in the signal.
Methods: We propose a deep learning framework to identify optical coherence tomography (OCT) M-scans that fall outside the expected distribution. Our approach incorporates adaptive remote center of motion (RCM)-informed retinal modeling along with time series analysis to effectively detect and rectify segmentation errors. This method estimates retinal distances and their associated confidence levels by leveraging retinal models, instrument positions, and validated distance data.
Results: Validation tests conducted on ex vivo human eyes reveal that our pipeline achieves an 88.8% accuracy in identifying out-of-distribution (OOD) measurements. Furthermore, distance estimation improved by 89% and 93% when compared to two existing methods, resulting in an overall mean absolute error (MAE) of less than 40 μm across diverse conditions, including scans with blood and obstructions.
Conclusion: This research enhances the accuracy of instrument-to-retina distance estimation, thereby contributing to improved patient safety in ophthalmic surgical procedures.
Significance: The proposed method has potential applications beyond ophthalmic surgery, offering benefits to a variety of surgical disciplines and sensor-equipped instruments.
Title: "Robust Distance Estimation with Out-of-distribution Detection in Ophthalmic Surgery."
Pub Date: 2026-02-04. DOI: 10.1109/TBME.2026.3661176
Fei Liang, Xin Shi, Hao Lu, Pengjie Qin, Liangwen Huang, Zixiang Yang, Yao Liu
Objective: To address the critical challenge of providing accurate, real-time lower-limb joint torque estimation across diverse locomotion conditions for adaptive human-exoskeleton interaction.
Methods: We developed a novel dual-branch architecture that synergizes temporal convolutional networks (TCN) and transformers to process surface electromyography and kinematic data. The TCN captures local temporal dynamics, while the transformer extracts global dependencies. A joint-specific task-aware residual fusion mechanism was introduced to dynamically synthesize these features, employing residual enhancement to adapt precisely to the distinct biomechanics of individual joints.
Results: Validated across twelve diverse locomotion patterns, the framework achieved root mean square errors (Nm/kg) and Pearson correlation coefficients of 0.1655/0.9904 (ankle), 0.1405/0.9588 (knee), and 0.1975/0.9698 (hip). It maintained a 4.2912 ms latency and showed strong adaptability on public datasets.
Conclusion: The proposed method effectively balances high estimation accuracy with the strict computational efficiency needed for real-time applications, successfully addressing previous issues in adapting to dynamic environments.
Significance: This work advances biomedical engineering by providing a fast, reliable solution for adaptive exoskeleton torque control, significantly enhancing seamless and natural human-robot interaction in assistive exoskeleton technologies.
Title: "Dual-Branch Fusion Network: Precise Decoding of Lower Limb Multi-Joint Torque."
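The per-joint figures above pair root mean square error (in Nm/kg, for body-weight-normalized torque) with the Pearson correlation coefficient. Both metrics are standard; a minimal implementation (function names are ours) is:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, e.g. in Nm/kg for normalized joint torque."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pearson_r(y_true, y_pred):
    """Pearson correlation between estimated and reference torque traces."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```

Note the two metrics are complementary: RMSE penalizes absolute errors, while Pearson r rewards matching the torque waveform shape even under offset or scaling.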
Pub Date: 2026-02-04. DOI: 10.1109/TBME.2026.3661029
Paulo Sampaio, Davide Scandella, C H Lucas Patty, Pablo Marquez-Neila, Heather DiFazio, Martin Wartenberg, Federico Storni, Brice-Olivier Demory, Daniel Candinas, Aurel Perren, Raphael Sznitman
Background: Frozen section (FS) tissue assessment is essential for guiding intraoperative surgical decision-making in oncology, particularly in procedures such as pancreatic ductal adenocarcinoma (PDAC) resections, where margin status critically impacts patient survival. FS, the current gold standard, while widely used, suffers from notable limitations, including tissue artifacts, dependence on specialized expertise, and slow turnaround times, resulting in sampling errors and false negatives.
Methods: To address these challenges, we present a novel approach for automatic cancer identification in fresh tissue biopsies using multispectral Mueller Matrix (MM) polarimetry. Our custom-built multispectral MM polarimeter captures polarization-resolved imaging across multiple wavelengths, enabling pixel-level analysis of tissue microstructure without staining or histology sectioning. Our approach thus allows for assessments in quasi-real time. Building on these measurements, we propose a deep learning model that uses MM data collected from PDAC patients to automatically distinguish cancerous from non-cancerous biopsies.
Results: Experimental results demonstrate classification performance comparable to that of FS assessment in routine clinical practice, with enhanced diagnostic speed. We show that our approach is consistent with pixel-wise annotations from histology slides.
Conclusion: This study highlights the potential of MM polarimetry combined with machine learning as a viable, label-free alternative for real-time intraoperative cancer detection.
Title: "Rapid, label-free cancer detection in fresh pancreatic tissue using deep learning and multispectral Mueller matrix polarimetry."
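The abstract does not describe the features the deep model learns from the 4x4 per-pixel Mueller matrices. For context only, a classic hand-crafted polarimetric feature derivable from a Mueller matrix M is the Gil-Bernabeu depolarization index; the sketch below computes it, but this is a standard textbook quantity, not the paper's learned representation:

```python
import numpy as np

def depolarization_index(M):
    """Gil-Bernabeu depolarization index of a 4x4 Mueller matrix.

    Returns a value in [0, 1]: 1 for a non-depolarizing element (e.g. the
    identity), 0 for an ideal depolarizer. A standard hand-crafted
    polarimetric feature, shown here only for context.
    """
    M = np.asarray(M, dtype=float)
    m00 = M[0, 0]
    return float(np.sqrt((np.sum(M**2) - m00**2) / (3.0 * m00**2)))
```

Depolarization-related quantities like this are commonly used to separate scattering-dominated tissue regions from ordered microstructure in tissue polarimetry.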
Pub Date: 2026-02-03. DOI: 10.1109/TBME.2026.3660307
Zheping Wang, Chengye Lin, Kai Chen
Objective: Physiological time series reflect the underlying behavior of physiological systems. In this paper, we introduce PBNSE, a novel patching-with-sequential-updating scheme for Bayesian nonparametric spectral estimation, to enhance spectral estimation and interpretation of imperfect physiological time series with fragmented, noncontiguous segments.
Methods: PBNSE incorporates four key strategies: (1) modeling patches as patch-specific Gaussian processes (GPs); (2) patch-dependence, where each patch involves a joint GP with a shared kernel, capturing both observation and spectral dependencies across all patches; (3) a sequential parameter shift that transfers knowledge between patches while maintaining computational tractability; and (4) aggregating patch-level posterior spectra into a unified power spectral density (PSD) estimate and computing the expectation of the PSD in closed form.
Results: Extensive experiments demonstrate significant improvements in spectral accuracy and robustness compared to state-of-the-art methods such as BNSE, multitaper, periodogram, Lomb-Scargle, functional kernel learning (FKL), and variational sparse spectrum (SVSS).
Conclusion: PBNSE addresses key challenges in physiological signal analysis, including irregular sampling, incomplete signals, and varying noise.
Significance: The widespread adoption of PBNSE in physiological signal research has the potential to enhance the accuracy of spectral estimation and improve the robustness of interpreting complex, real-world physiological time series.
Title: "Patching with Sequential Updating for High-Fidelity Bayesian Spectral Estimation of Physiological Time Series."
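Strategy (4) aggregates per-patch spectra into one unified PSD. PBNSE does this with GP posterior spectra; as a rough structural sketch, the version below substitutes a plain FFT periodogram per patch and averages, so the aggregation step (not the Bayesian machinery) is what it illustrates. Function name and defaults are ours:

```python
import numpy as np

def aggregate_patch_psd(patches, fs=1.0, nfft=256):
    """Average per-patch periodograms into one PSD estimate.

    Stand-in for PBNSE's aggregation step: each fragmented, noncontiguous
    patch contributes a spectrum, and the patch-level spectra are combined
    into a unified estimate. Here a simple FFT periodogram replaces the
    paper's GP posterior spectra.
    """
    psds = []
    for x in patches:
        x = np.asarray(x, float) - np.mean(x)        # remove per-patch DC offset
        X = np.fft.rfft(x, n=nfft)
        psds.append((np.abs(X) ** 2) / (fs * len(x)))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, np.mean(psds, axis=0)
```

Averaging across patches suppresses patch-specific noise while retaining spectral peaks common to all fragments, which is the intuition behind pooling patch-level spectra.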
Pub Date: 2026-02-03. DOI: 10.1109/TBME.2026.3660806
Shini Renjith, Karthik Gopalakrishnan, Tobias Loddenkemper, Daniel Friedman, Mark Spitz, Mitchell A Frankel, Mark J Lehmkuhle, V John Mathews
Objective: This paper presents a two-stage machine learning model for electrographic seizure detection using wearable single-channel scalp electroencephalogram (EEG) sensors.
Methods: The algorithm first detects seizures in short, nonoverlapping segments. The segments that Stage-I classifies as ictal are fed to Stage-II with the goal of reducing the false alert rate (FAR). A post-processing framework is applied to the segment-level binary results to create event-level decisions.
Results: The performance of the two-stage system for detecting electrographically focal seizures was evaluated on EEGs recorded in a multi-center study. The two-stage algorithm exhibited increased sensitivity and reduced FAR when compared to single-stage models. For example, a two-stage model employing a balanced bagging classifier for Stage-I and a gradient boosting classifier for Stage-II improved the sensitivity of seizure detection from 61 ± 5.9% to 75 ± 6.6% while reducing the FAR from 3.3 ± 0.3/hr to 2.4 ± 0.3/hr.
Conclusion: The two-stage algorithm of this paper exhibited statistically significant performance improvement in detecting electrographically focal seizures over single-stage approaches. In addition, adding memory at the input of Stage-I and incorporating an iterative learning algorithm in Stage-I statistically significantly improved the performance of the first stage.
Significance: The performance of the two-stage method for single-channel seizure detection suggests its potential to enhance support systems used by epileptologists for post-hoc reviews. This system may represent the beginning of the roadmap for long-duration seizure monitoring using wearable single-channel EEG sensors during activities of daily life.
Title: "A two-stage algorithm to detect electrographically focal seizures using a wearable single-channel EEG sensor."
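The post-processing step that turns segment-level binary decisions into event-level decisions is not detailed in the abstract. A common pattern, sketched here under our own assumptions (the `min_len` and `max_gap` thresholds are hypothetical, not the paper's), is to merge runs of positive segments into events, bridging short gaps and discarding very short runs:

```python
def segments_to_events(decisions, min_len=2, max_gap=1):
    """Merge segment-level binary decisions into event-level (start, end) spans.

    Illustrative post-processing: runs of positive segments separated by at
    most `max_gap` negative segments are bridged into one event, and events
    shorter than `min_len` segments are discarded. End indices are exclusive.
    """
    events, start, gap = [], None, 0
    for i, d in enumerate(decisions):
        if d:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:
                end = i - gap + 1        # one past the last positive segment
                if end - start >= min_len:
                    events.append((start, end))
                start, gap = None, 0
    if start is not None:                # flush an event still open at the end
        end = len(decisions) - gap
        if end - start >= min_len:
            events.append((start, end))
    return events
```

This kind of merging is what converts a per-segment FAR into the event-level alert rates reported above.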
The hybrid EEG-fNIRS brain-computer interface (BCI) combines the high temporal resolution of electroencephalography (EEG) with the high spatial resolution of functional near-infrared spectroscopy (fNIRS) to enable comprehensive brain activity detection. However, integrating these modalities to obtain highly discriminative features remains challenging. Most existing methods fail to effectively capture the spatiotemporal coupling features and correlations between EEG and fNIRS signals. Furthermore, these methods adopt a holistic learning paradigm for the representation of each modality, leading to unrefined and redundant multimodal representations. To address these challenges, we propose a disentangled multimodal spatiotemporal learning (DMSL) method for hybrid EEG-fNIRS BCI systems, which simultaneously performs multimodal spatiotemporal coupling and disentangled representation learning within a unified framework. Specifically, DMSL utilizes a compact convolutional module with one-dimensional temporal and spatial convolution layers to extract complex spatiotemporal patterns from each modality and introduces a multimodal attention interaction module to comprehensively capture the inter-modality correlations, enhancing the representations for each modality. Subsequently, DMSL designs an adaptive multi-branch graph convolutional module based on reconstructed channels to effectively capture the spatiotemporal coupling features, incorporating modality consistency and disparity constraints to disentangle common and modality-specific representations for each modality. These disentangled representations are finally adaptively fused to perform different task predictions. The proposed DMSL demonstrates state-of-the-art performance on publicly available datasets for mental arithmetic, motor imagery, and emotion recognition tasks, exceeding the best baselines by 2.34%, 0.59%, and 1.47%, respectively.
These results demonstrate the effectiveness of DMSL in improving EEG-fNIRS decoding and its strong generalization ability in BCI applications.
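The modality consistency and disparity constraints mentioned above can be sketched in a few lines. The abstract does not give DMSL's exact loss formulation, so the choices below are illustrative assumptions: MSE alignment of the two modalities' common representations (consistency) and squared-cosine decorrelation between each modality's common and specific parts (disparity).

```python
import numpy as np

def consistency_disparity_losses(common_eeg, common_fnirs, spec_eeg, spec_fnirs):
    """Illustrative disentanglement losses over (batch, dim) feature arrays.

    Hypothetical stand-ins for DMSL's constraints: pull the two modalities'
    common representations together, and push each modality's common part
    away from its modality-specific part.
    """
    def cos_sim(a, b):
        num = np.sum(a * b, axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
        return num / den

    consistency = np.mean((common_eeg - common_fnirs) ** 2)        # MSE alignment
    disparity = (np.mean(cos_sim(common_eeg, spec_eeg) ** 2)       # decorrelation
                 + np.mean(cos_sim(common_fnirs, spec_fnirs) ** 2))
    return consistency, disparity

rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 8))
cons, disp = consistency_disparity_losses(
    shared, shared + 0.01 * rng.normal(size=(4, 8)),   # nearly aligned commons
    rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))  # independent specifics
```

In training, both terms would be minimized jointly with the task loss; here, nearly aligned common features yield a consistency loss close to zero.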
{"title":"Disentangled Multimodal Spatiotemporal Learning for Hybrid EEG-fNIRS Brain-Computer Interface.","authors":"Yun Xu, Chi-Man Vong, Zihao Xu, Jianlin Fu, Junhua Li, Chuangquan Chen","doi":"10.1109/TBME.2026.3660692","DOIUrl":"https://doi.org/10.1109/TBME.2026.3660692","url":null,"abstract":"<p><p>The hybrid EEG-fNIRS Brain-computer interface (BCI) combines the high temporal resolution of electroencephalography (EEG) with the high spatial resolution of functional near-infrared spectroscopy (fNIRS) to enable comprehensive brain activity detection. However, integrating these modalities to obtain highly discriminative features remains challenging. Most existing methods fail to effectively capture the spatiotemporal coupling features and correlations between EEG and fNIRS signals. Furthermore, these methods adopt a holistic learning paradigm for the representation of each modality, leading to unrefined and redundant multimodal representations. To address these challenges, we propose a disentangled multimodal spatiotemporal learning (DMSL) method for hybrid EEG-fNIRS BCI systems, which simultaneously performs multimodal spatiotemporal coupling and disentangled representation learning within a unified framework. Specifically, DMSL utilizes a compact convolutional module with one-dimensional temporal and spatial convolution layers to extract complex spatiotemporal patterns from each modality and introduces a multimodal attention interaction module to comprehensively capture the inter-modality correlations, enhancing the representations for each modality. Subsequently, DMSL designs an adaptive multi-branch graph convolutional module based on reconstructed channels to effectively capture the spatiotemporal coupling features, incorporating modality consistency and disparity constraints to disentangle common and modality-specific representations for each modality. These disentangled representations are finally adaptively fused to perform different task predictions. 
The proposed DMSL demonstrates state-of-the-art performance on publicly available datasets for mental arithmetic, motor imagery, and emotion recognition tasks, exceeding the best baselines by 2.34%, 0.59%, and 1.47%, respectively. These results demonstrate the effectiveness of DMSL in improving EEG-fNIRS decoding and its strong generalization ability in BCI applications.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146113174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-03, DOI: 10.1109/TBME.2026.3660874
Yuepeng Qian, Jingfeng Xiong, Haoyong Yu, Chenglong Fu
Gait asymmetry is a significant clinical characteristic of hemiplegic gait from which most stroke survivors suffer, leading to limited mobility and long-term negative impacts on their quality of life. Although a variety of exoskeleton controls have been developed for robot-aided gait rehabilitation, little attention has been paid to correcting the gait asymmetry of stroke patients, and it remains challenging to properly share control between the exoskeleton and patients with partial motor control. In view of this, an assist-as-needed (AAN) hip exoskeleton control with human-in-the-loop optimization is proposed to correct gait asymmetry in hemiplegic gait. To realize the AAN concept, an objective function was designed for real-time evaluation of the subject's gait performance and active participation, which considers the variability of natural human movement and guides the online tuning of control parameters on a subject-specific basis. In this way, subjects were encouraged to contribute as much as possible to movement, thus maximizing the efficiency and outcomes of gait rehabilitation. Finally, an experimental study was conducted to verify the feasibility of the proposed control with simulated hemiplegic gait, and the common hypothesis that AAN controls can improve active human participation was clearly validated from a biomechanics perspective.
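An objective of the kind described, one that rewards tracking while tolerating natural movement variability and penalizes robot effort, can be sketched minimally. The paper's actual objective function is not given in the abstract, so the deadband form, the weights, and the torque penalty below are all hypothetical.

```python
import numpy as np

def aan_objective(target_traj, actual_traj, exo_torque,
                  w_track=1.0, w_assist=0.1, tol=2.0):
    """Hypothetical assist-as-needed cost for one gait cycle.

    Tracking error inside the deadband `tol` (in degrees, modeling natural
    movement variability) costs nothing; exoskeleton torque is penalized so
    that minimizing the cost during online tuning encourages the wearer to
    contribute actively rather than rely on the robot.
    """
    err = np.abs(np.asarray(actual_traj, float) - np.asarray(target_traj, float))
    track_cost = np.mean(np.maximum(err - tol, 0.0) ** 2)   # error beyond deadband
    assist_cost = np.mean(np.square(exo_torque))            # robot effort penalty
    return w_track * track_cost + w_assist * assist_cost

# Small deviations inside the deadband with zero torque cost nothing ...
cost_active = aan_objective([0, 10, 20], [1, 11, 19], [0.0, 0.0, 0.0])
# ... while large tracking errors under high assistance are penalized.
cost_passive = aan_objective([0, 10, 20], [5, 20, 30], [1.0, 1.0, 1.0])
```

A human-in-the-loop optimizer would evaluate such a cost each gait cycle and adjust assistance parameters per subject to drive it down.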
{"title":"Assist-as-needed Hip Exoskeleton Control for Gait Asymmetry Correction via Human-in-the-loop Optimization.","authors":"Yuepeng Qian, Jingfeng Xiong, Haoyong Yu, Chenglong Fu","doi":"10.1109/TBME.2026.3660874","DOIUrl":"https://doi.org/10.1109/TBME.2026.3660874","url":null,"abstract":"<p><p>Gait asymmetry is a significant clinical characteristic of hemiplegic gait that most stroke survivors suffer, leading to limited mobility and long-term negative impacts on their quality of life. Although a variety of exoskeleton controls have been developed for robot-aided gait rehabilitation, little attention has been paid to correcting the gait asymmetry of stroke patients, and it remains challenging to properly share control between the exoskeleton and patients with partial motor control. In view of this, an assist-as-needed (AAN) hip exoskeleton control with human-in-the-loop optimization is proposed to correct gait asymmetry in hemiplegic gait. To realize the AAN concept, an objective function was designed for real-time evaluation of the subject's gait performance and active participation, which considers the variability of natural human movement and guides the online tuning of control parameters on a subject-specific basis. In this way, subjects were stimulated to contribute as much as possible to movement, thus maximizing the efficiency and outcomes of gait rehabilitation. 
Finally, an experimental study was conducted to verify the feasibility of the proposed control with simulated hemiplegic gait, and the common hypothesis that AAN controls can improve active human participation was clearly validated from a biomechanics perspective.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146113080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-02, DOI: 10.1109/TBME.2026.3660309
Kaichuan Yang, Chengyi Liu, Shuicai Wu
Objective: Accurate segmentation of heart sound signal stages is critical in cardiovascular disease analysis.
Methods: This study proposes the integration of a duration hidden Markov model (DHMM) with a temporal convolutional network (TCN) and an adaptive calibration mechanism (based on electrocardiogram signals) to enable the precise segmentation of complex heart sound signals. Multiple features of the heart sound signal are extracted and used as model inputs, and a segmentation architecture is constructed in which a TCN improves observation probability estimation and an attention mechanism is integrated into the Viterbi algorithm.
Results: The experimental results demonstrated that the average accuracy of this method is 94.71 ± 2.64% at a segmentation error tolerance of 50 ms. The enhanced Viterbi algorithm elevated performance by approximately 9 percentage points. Furthermore, the adaptive calibration mechanism yielded an additional average accuracy increase of 1.41 percentage points and reduced the standard deviation by 1.21 percentage points.
Conclusion: Compared to traditional methods employing Gaussian distribution-based observation probability estimation, the use of a TCN substantially enhanced state discrimination accuracy, achieving an improvement of approximately 3 percentage points. The refined Viterbi algorithm also outperformed prior methods.
Significance: This method enables effective segmentation of complex heart sound data, delivering a high-precision solution for the automated analysis of heart sounds. Our code is available at https://github.com/KC-Y-bjut/Heart-sound-segmentation.
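The decoding pipeline described in the Methods, per-frame state probabilities fed into an HMM and decoded with the Viterbi algorithm, can be illustrated with a plain log-domain Viterbi over the four heart-sound states. The duration densities, attention weighting, and ECG-based calibration of the actual method are omitted for brevity, and the observation probabilities below are synthetic stand-ins for TCN outputs.

```python
import numpy as np

# Four heart-sound states in their fixed physiological order (toy setup).
STATES = ["S1", "systole", "S2", "diastole"]
S = len(STATES)

def viterbi(obs_logprob, trans_logprob, init_logprob):
    """Standard log-domain Viterbi decode of obs_logprob (T, S)."""
    T, n = obs_logprob.shape
    delta = np.full((T, n), -np.inf)
    psi = np.zeros((T, n), dtype=int)
    delta[0] = init_logprob + obs_logprob[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + trans_logprob   # scores[i, j]: i -> j
        psi[t] = np.argmax(scores, axis=0)               # best predecessor per state
        delta[t] = scores[psi[t], np.arange(n)] + obs_logprob[t]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                        # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Cyclic left-to-right transitions: a state may persist or advance to the next.
trans = np.full((S, S), -np.inf)
for i in range(S):
    trans[i, i] = np.log(0.5)
    trans[i, (i + 1) % S] = np.log(0.5)
init = np.full(S, -np.inf)
init[0] = 0.0  # assume the recording starts at S1

# Synthetic per-frame state probabilities standing in for TCN outputs.
true_path = [0, 0, 1, 1, 2, 2, 3, 3]
obs = np.full((len(true_path), S), np.log(0.05))
for t, s in enumerate(true_path):
    obs[t, s] = np.log(0.85)
decoded = viterbi(obs, trans, init)
```

The cyclic transition structure enforces the physiological S1 → systole → S2 → diastole order; a DHMM additionally replaces the geometric self-loop durations with explicit duration densities.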
{"title":"Accurate Heart Sound Segmentation with Temporal Convolutional Network-Enhanced Duration Hidden Markov Model and Adaptive Calibration.","authors":"Kaichuan Yang, Chengyi Liu, Shuicai Wu","doi":"10.1109/TBME.2026.3660309","DOIUrl":"https://doi.org/10.1109/TBME.2026.3660309","url":null,"abstract":"<p><strong>Objective: </strong>Accurate segmentation of heart sound signal stages is critical in cardiovascular disease analysis.</p><p><strong>Methods: </strong>This study proposed the integration of a duration hidden Markov model (DHMM) with a temporal convolutional network (TCN) and an adaptive calibration mechanism (based on electrocardiogram signals) to enable the precise segmentation of complex heart sound signals. Multiple features of heart sound signals are extracted and utilized as model inputs, constructed a segmentation model architecture improved by TCN-based observation probability estimation and an attention mechanism integrated into the Viterbi algorithm.</p><p><strong>Results: </strong>The experimental results demonstrated that the average accuracy of this method is 94.71 ± 2.64% at a segmentation error of 50ms. The enhanced Viterbi algorithm elevated performance by approximately 9 percentage points. Furthermore, the adaptive calibration mechanism yielded an additional average accuracy increase of 1.41 percentage points and reduced the standard deviation by 1.21 percentage points. Conclusion: Compared to traditional methods employing Gaussian distribution-based observation probability estimation, the utilization of a TCN substantially enhanced state discrimination accuracy, achieving an improvement of approximately 3 percentage points. The refined Viterbi algorithm demonstrated superior performance relative to prior methodologies.</p><p><strong>Significance: </strong>This method enables effective segmentation of complex heart sound data, delivering a high-precision solution for the automated analysis of heart sounds. 
Our code can be found in https://github.com/KC-Y-bjut/Heart-sound-segmentation.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.5,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146105254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}