Pub Date: 2026-01-19 | DOI: 10.1088/1361-6579/ae2562
Márton Á Goda, Helen Badge, Jasmeen Khan, Yosef Solewicz, Moran Davoodi, Rumbidzai Teramayi, Dennis Cordato, Longting Lin, Lauren Christie, Christopher Blair, Gagan Sharma, Mark Parsons, Joachim A Behar
Objective. Large vessel occlusion (LVO) stroke presents a major challenge in clinical practice due to the potential for poor outcomes with delayed treatment. Treatment for LVO involves highly specialized care, in particular endovascular thrombectomy, and is available only at certain hospitals. Prehospital identification of LVO by emergency ambulance services can therefore be critical for triaging LVO stroke patients directly to a hospital with access to endovascular therapy. Clinical scores exist to help distinguish LVO from less severe strokes, but they are based on a series of examinations that can be time-consuming and may be impractical for patients with dementia or those who cannot follow commands because of their stroke. There is a need for a fast and reliable method to aid in the early identification of LVO. In this study, our objective was to assess the feasibility of using a 30 s photoplethysmography (PPG) recording to assist in recognizing LVO stroke. Approach. A total of 88 patients, including 25 with LVO, 27 with stroke mimic (SM), and 36 non-LVO stroke patients (NL), were recorded at the Liverpool Hospital emergency department in Sydney, Australia. Demographics (age, sex), as well as morphological features and beating rate variability measures, were extracted from the PPG. A binary classification approach was employed to differentiate between LVO stroke and NL + SM (NL.SM). A 2:1 train-test split was stratified and repeated randomly across 100 iterations. Main results. The best model achieved a median test-set area under the receiver operating characteristic curve of 0.77 (0.71-0.82). Significance. Our study demonstrates the potential of a 30 s PPG recording for identifying LVO stroke.
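The evaluation protocol above (a stratified 2:1 train-test split, repeated randomly across 100 iterations, reporting the median test-set AUROC) can be sketched as follows. The function names, the precomputed per-patient scores, and the rank-based AUROC computation are illustrative assumptions, not the authors' code:

```python
import random
import statistics

def auroc(scores, labels):
    """AUROC via the Mann-Whitney rank formulation: the probability that a
    random positive is scored above a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def stratified_split(labels, train_frac=2 / 3, rng=None):
    """Return train/test index lists preserving the class ratio (2:1 split)."""
    rng = rng or random.Random()
    train, test = [], []
    for cls in set(labels):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        k = round(train_frac * len(idx))
        train += idx[:k]
        test += idx[k:]
    return train, test

def median_test_auroc(scores, labels, n_iter=100, seed=0):
    """Median test-set AUROC over repeated stratified 2:1 splits, scoring
    each held-out sample with a precomputed model score."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(n_iter):
        _, test = stratified_split(labels, rng=rng)
        aucs.append(auroc([scores[i] for i in test], [labels[i] for i in test]))
    return statistics.median(aucs)
```

Reporting the median (with an interquartile-style range) over repeated splits, as the abstract does, is more robust to a lucky or unlucky single split than one fixed hold-out in a cohort of 88 patients.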
{"title":"Machine learning for triage of strokes with large vessel occlusion using photoplethysmography biomarkers.","authors":"Márton Á Goda, Helen Badge, Jasmeen Khan, Yosef Solewicz, Moran Davoodi, Rumbidzai Teramayi, Dennis Cordato, Longting Lin, Lauren Christie, Christopher Blair, Gagan Sharma, Mark Parsons, Joachim A Behar","doi":"10.1088/1361-6579/ae2562","DOIUrl":"10.1088/1361-6579/ae2562","url":null,"abstract":"<p><p><i>Objective.</i>Large vessel occlusion (LVO) stroke presents a major challenge in clinical practice due to the potential for poor outcomes with delayed treatment. Treatment for LVO involves highly specialized care, in particular endovascular thrombectomy, and is available only at certain hospitals. Therefore, prehospital identification of LVO by emergency ambulance services, can be critical for triaging LVO stroke patients directly to a hospital with access to endovascular therapy. Clinical scores exist to help distinguish LVO from less severe strokes, but they are based on a series of examinations that can be time-consuming and may be impractical for patients with dementia or those who cannot follow commands due to their stroke. There is a need for a fast and reliable method to aid in the early identification of LVO. In this study, our objective was to assess the feasibility of using 30 s photoplethysmography (PPG) recording to assist in recognizing LVO stroke.<i>Approach.</i>A total of 88 patients, including 25 with LVO, 27 with stroke mimic (SM), and 36 non-LVO stroke patients (NL), were recorded at the Liverpool Hospital emergency department in Sydney, Australia. Demographics (age, sex), as well as morphological features and beating rate variability measures, were extracted from the PPG. A binary classification approach was employed to differentiate between LVO stroke and NL + SM (NL.SM). 
A 2:1 train-test split was stratified and repeated randomly across 100 iterations.<i>Main results.</i>The best model achieved a median test set area under the receiver operating characteristic curve of 0.77 (0.71-0.82).<i>Significance.</i>Our study demonstrates the potential of utilizing a 30 s PPG recording for identifying LVO stroke.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145637617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-14 | DOI: 10.1088/1361-6579/ae3365
S Likitalo, A Anzanpour, A Axelin, T Jaako, P Celka
Objective. Fetal and maternal health during pregnancy can be monitored with sensors such as Doppler or scalp fetal ECG. This study focuses on single-channel dry-electrode maternal abdominal ECG (aECG) to extract fetal heart rate (fHR) using a low-complexity algorithm suitable for low-power wearables. Approach. A hybrid model combining machine learning, QRS masking, and data fusion was trained on two PhysioNet databases and synthetically generated aECG. Model selection employed the Akaike criterion with data balancing and random sampling. Main results. The algorithm was tested on 80 recordings from the Computing in Cardiology Challenge 2013 (CCC) and the Abdominal and Direct Fetal ECG Database (ADFD), augmented with 100 synthetic aECG recordings. Performance for fetal QRS detection reached Precision = 97.2 (82.2)%, Specificity = 99.8 (93.8)%, and Sensitivity = 97.4 (93.9)% on ADFD and CCC, respectively. Clinical validation used the Polar Electro Oy H10 dry-electrode device at the Maternity Hospital of Southwest Finland. Four subjects (gestational age 39.8 ± 1.3 weeks) were analyzed, with seven discarded. For fHR, the mean absolute percentage error was 1.9 ± 1.0%, Availability 79.6 ± 3.9%, and coverage probability CP5 = 76.2%, CP10 = 87.5%. Significance. These results demonstrate the feasibility of fHR monitoring from dry-electrode aECG tailored for low-power wearables. Signal quality in clinical subjects matched the lowest PhysioNet cases, confirming robustness under low signal-to-noise conditions.
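The fHR accuracy metrics reported above can be computed as in this sketch, which assumes the common definitions: MAPE as the mean relative error against the reference fHR, and CPx as the percentage of estimates within x bpm of the reference. The paper's exact definitions may differ:

```python
def mape(est, ref):
    """Mean absolute percentage error between estimated and reference fHR."""
    return 100 * sum(abs(e - r) / r for e, r in zip(est, ref)) / len(ref)

def coverage_probability(est, ref, tol_bpm):
    """CPx: percentage of fHR estimates within tol_bpm of the reference."""
    hits = sum(abs(e - r) <= tol_bpm for e, r in zip(est, ref))
    return 100 * hits / len(ref)
```

Under these definitions, CP10 >= CP5 always holds, consistent with the reported CP5 = 76.2% and CP10 = 87.5%.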
{"title":"Low-complexity fetal heart rate monitoring from carbon-based single-channel dry electrodes maternal electrocardiogram.","authors":"S Likitalo, A Anzanpour, A Axelin, T Jaako, P Celka","doi":"10.1088/1361-6579/ae3365","DOIUrl":"https://doi.org/10.1088/1361-6579/ae3365","url":null,"abstract":"<p><p><i>Objective</i>. Fetal and maternal health during pregnancy can be monitored with sensors such as Doppler or scalp fetal ECG. This study focuses on single-channel dry electrode maternal abdominal ECG (<i>aECG</i>) to extract fetal heart rate (<i>fHR</i>) using a low-complexity algorithm suitable for low-power wearables.<i>Approach</i>. A hybrid model combining machine learning, QRS masking, and data fusion was trained on two PhysioNet databases and synthetically generated<i>aECG</i>. Model selection employed the Akaike criterion with data balancing and random sampling.<i>Main results</i>. The algorithm was tested on 80 recordings from the Computer in Cardiology Challenge 2013 (CCC) and the abdominal and direct fetal database (ADFD), augmented with 100 synthetic<i>aECG</i>. Performance for fetal QRS detection reachedPrecision=97.2(82.2)%,Specificity=99.8(93.8)%, andSensitivity=97.4(93.9)% on ADFD and CCC, respectively. Clinical validation used the Polar Electro Oy H10 dry-electrode device at the Maternity Hospital of Southwest Finland. Four subjects (gestational age39.8±1.3 weeks) were analyzed, with seven discarded. For<i>fHR</i>, the mean absolute percentage error was1.9±1.0%, Availability79.6±3.9%, and coverage probabilityCP5=76.2%,CP10=87.5%.<i>Significance</i>. These results demonstrate the feasibility of<i>fHR</i>monitoring from dry-electrode<i>aECG</i>tailored for low-power wearables. 
Signal quality in clinical subjects matched the lowest PhysioNet cases, confirming robustness under low signal-to-noise conditions.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":"47 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145966634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | DOI: 10.1088/1361-6579/ae3357
Wei Zhong, Ruiwen Li, Xin Yu
Objective. The fetal electrocardiogram (FECG) is critical for monitoring fetal health; however, its extraction remains technically challenging due to strong interference from the maternal electrocardiogram (MECG) in the abdominal electrocardiogram (AECG). Therefore, an attention-based generative adversarial network (AGAN) is proposed for source separation of the FECG from single-lead AECG signals. Approach. The AGAN architecture combines two techniques: GAN-style adversarial training for high-quality signal generation and attention mechanisms for feature selection, leading to superior target-signal extraction from complex mixtures. The innovation of the proposed method lies in addressing the amplitude-bias issue in multi-objective learning tasks. The work employs the Hadamard product as the learning objective for the model, preventing the model from favoring high-amplitude components (e.g. MECG) while neglecting low-amplitude yet critical features (e.g. FECG). Main results. Experimental results demonstrate that the proposed method can effectively and simultaneously separate both MECG and FECG components from single-lead AECG signals. When evaluated on the ADFECGDB, B2_LABOUR, and PCDB datasets, the proposed method demonstrated consistent performance, achieving the following SE, PPV, and F1 scores: 96.67%, 97.13%, and 96.90% on ADFECGDB; 95.90%, 96.56%, and 96.22% on B2_LABOUR; and 94.96%, 95.18%, and 95.06% on PCDB. Significance. This study presents a robust method for FECG extraction while simultaneously introducing an innovative data-driven framework for blind source separation problems.
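The Hadamard-product objective described above can be illustrated with a minimal sketch: instead of regressing the raw waveform, the model learns a multiplicative mask applied element-wise to the AECG, so low-amplitude sources such as the FECG are not drowned out by the MECG. The mask formulation and MSE loss here are assumptions for illustration, not the paper's exact objective:

```python
def hadamard(a, b):
    """Element-wise (Hadamard) product of two equal-length sequences."""
    return [x * y for x, y in zip(a, b)]

def masked_source_loss(mask, aecg, target):
    """Mean squared error between the mask-gated abdominal ECG (mask * AECG)
    and one target source (e.g. the FECG component).  Learning a mask rather
    than the raw waveform keeps low-amplitude sources on an equal footing
    with high-amplitude ones in a multi-source objective."""
    est = hadamard(mask, aecg)
    return sum((e - t) ** 2 for e, t in zip(est, target)) / len(target)
```

In a multi-source setting, one mask per source (MECG and FECG) would be learned and the per-source losses summed, so neither component dominates the gradient by amplitude alone.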
{"title":"Deep source separation for single-channel fetal ECG extraction.","authors":"Wei Zhong, Ruiwen Li, Xin Yu","doi":"10.1088/1361-6579/ae3357","DOIUrl":"10.1088/1361-6579/ae3357","url":null,"abstract":"<p><p><i>Objective.</i>the fetal electrocardiogram (FECG) is critical for monitoring fetal health, however, its extraction remains technically challenging due to strong interference from the maternal electrocardiogram (MECG) in abdominal electrocardiogram (AECG). Therefore, an attention-based generative adversarial network (AGAN) is proposed for source separation of FECG from single-lead AECG signals.<i>Approach.</i>the AGAN architecture uniquely combines two powerful techniques: GAN-style adversarial training for high-quality data generation and attention-based focus mechanisms for intelligent feature selection, leading to superior target signal extraction from complex mixtures. The innovation of the proposed method lies in addressing the amplitude bias issue in multi-objective learning tasks. This work innovatively employs the Hadamard product as the learning objective for the model, preventing the model from favoring high-amplitude components (e.g. MECG) while neglecting low-amplitude yet critical features (e.g. FECG).<i>Main results.</i>experimental results demonstrate that the proposed method can effectively and simultaneously separate both MECG and FECG components from single-lead AECG signals. 
When evaluated on the ADFECGDB, B2_LABOUR, and PCDB datasets, the proposed method demonstrated consistent performance, achieving the following SE, PPV, and<i>F</i>1 scores: 96.67%, 97.13%, and 96.90% on ADFECGDB; 95.90%, 96.56%, and 96.22% on B2_LABOUR; and 94.96%, 95.18%, and 95.06% on PCDB.<i>Significance.</i>this study presents a robust method for FECG extraction while simultaneously introducing an innovative data-driven framework for blind source separation problems.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145906356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07 | DOI: 10.1088/1361-6579/ae2f8a
Yuxuan Wu, Jijun Tong, Pengjia Qi
Objective. Cardiovascular disease (CVD) poses a severe global health threat, and the electrocardiogram (ECG) is crucial for early CVD diagnosis. Two popular deep learning methods, the convolutional neural network (CNN) and the long short-term memory (LSTM) network, have been studied for ECG modeling and CVD diagnosis, but CNNs adopt fixed kernels, which reduces efficiency and introduces noise, while LSTMs struggle with local feature correlations. Approach. This study proposes an adaptive CNN-LSTM (aCNN-LSTM) fusion network for ECG diagnosis. A newly designed adaptive convolutional kernel dynamically adjusts its size based on local signal variance: smaller kernels optimize efficiency in stationary segments, while larger kernels extract diverse features in non-stationary regions. The adaptive features from the aCNN are then fed into the LSTM to capture temporal relationships. Finally, a spatial-temporal fusion mechanism is applied and multi-class classification is achieved via the output layer. Main results. Experiments on the PTB-XL dataset show that the proposed aCNN-LSTM network outperforms CNN, LSTM, and CNN-LSTM in diagnostic performance: its overall accuracy reaches 89.89%, macro-average F1-score is 0.9640, and weighted-average F1-score is 0.9698. Significance. This method enhances the efficiency and accuracy of automatic ECG diagnosis, and provides reliable technical support for early CVD screening in clinical and primary medical settings.
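The variance-driven kernel selection described above can be sketched as follows. The window length, the two kernel sizes, and the median-based threshold are illustrative assumptions; the paper's actual selection rule is not given in the abstract:

```python
import statistics

def adaptive_kernel_sizes(signal, window=8, small=3, large=9):
    """For each non-overlapping window, pick a small kernel where the local
    variance is low (stationary segment) and a large kernel where it is high
    (non-stationary), echoing the variance-driven kernel selection above.
    The median of per-window variances serves as the (assumed) threshold."""
    variances = [statistics.pvariance(signal[i:i + window])
                 for i in range(0, len(signal) - window + 1, window)]
    thresh = statistics.median(variances)
    return [large if v > thresh else small for v in variances]
```

A flat baseline segment thus gets the small kernel, while an oscillating (e.g. QRS-dense) segment gets the large one.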
{"title":"A novel adaptive CNN-LSTM fusion network for electrocardiogram diagnosis.","authors":"Yuxuan Wu, Jijun Tong, Pengjia Qi","doi":"10.1088/1361-6579/ae2f8a","DOIUrl":"10.1088/1361-6579/ae2f8a","url":null,"abstract":"<p><p><i>Objective.</i>Cardiovascular disease (CVD) causes severe global health threat, and electrocardiogram (ECG) is crucial for early CVD diagnosis. Recently, two popular deep learning methods, that is, convolutional neural network (CNN) and long short-term memory (LSTM) network are studied for ECG modeling and CVD diagnosis, but CNN adopts fixed kernels, thereby reducing efficiency and introducing noise, and LSTM struggles with local feature correlations.<i>Approach.</i>This study proposes an adaptive CNN-LSTM (aCNN-LSTM) fusion network for ECG diagnosis. An adaptive convolutional kernel is newly designed, which can dynamically adjust size based on local signal variance. Smaller kernels optimize efficiency in stationary segments, while larger kernels extract diverse features in non-stationary regions. The adaptive features from aCNN are further fed into LSTM to capture temporal relationships. 
Finally, a spatial-temporal fusion mechanism is used and a multi-class classification is achieved via the output layer.<i>Main results.</i>Experiments on the PTB-XL dataset show that the proposed aCNN-LSTM net outperforms CNN, LSTM, and CNN-LSTM in diagnosis performance: its overall accuracy reaches 89.89%, macro-average<i>F</i>1-score is 0.9640, and weighted-average<i>F</i>1-score is 0.9698.<i>Significance.</i>This method enhances the efficiency and accuracy of automatic ECG diagnosis, and provides reliable technical support for early CVD screening in clinical and primary medical settings.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145794374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-06 | DOI: 10.1088/1361-6579/ae2f18
Feng Wang, Lian Yan, Jia Xu, Mingxin Qin, Jian Sun, Lin Xu, Wei Zhuang, Xu Ning, Gui Jin, Mingsheng Chen
Objective. Prompt identification of haematomas is crucial for effective clinical treatment. Magnetic induction phase shift (MIPS) technology, known for its portability, non-contact nature, and affordability, is limited by the weak signal induced by cerebral hemorrhage, resulting in poor sensitivity that urgently needs improvement. Approach. A magnetic nanoparticle tracer is introduced to produce a robust induced magnetic field. A symmetrical gradiometer coil is used as the receiving coil to nullify the effect of the primary magnetic field generated by the excitation coil, which is designed as a Helmholtz coil. Main results. In vitro experiments demonstrate the markedly improved sensitivity and stability of the detection system, with magnetic nanoparticles notably boosting the MIPS signal for hemorrhage. Moreover, in vivo experiments employing a rabbit autologous-blood cerebral hemorrhage model reveal that, with a hemorrhage volume of 2 ml, the experimental group with magnetic nanoparticles showed a 23-fold larger MIPS signal change than the control group without them. Significance. The sensitivity of MIPS for hemorrhage detection is significantly improved compared to the traditional method.
{"title":"Research on hemorrhagic stroke detection enhanced by magnetic nanoparticle-based magnetic induction.","authors":"Feng Wang, Lian Yan, Jia Xu, Mingxin Qin, Jian Sun, Lin Xu, Wei Zhuang, Xu Ning, Gui Jin, Mingsheng Chen","doi":"10.1088/1361-6579/ae2f18","DOIUrl":"10.1088/1361-6579/ae2f18","url":null,"abstract":"<p><p><i>Objective.</i>Prompt identification of haematomas is crucial for effective clinical treatment. Magnetic induction phase shift technology (MIPS), known for its portability, non-contact nature, and affordability, is limited by the weak signal induced by cerebral hemorrhage leading to poor sensitivity, which is urgent to be improved.<i>Approach</i>. Tracer of magnetic nanoparticles is introduced to produce robust induced magnetic field. A symmetrical gradiometer coil is used as the receiving coil to nullify the effect of primary magnetic field generated by the excitation coil, which is designed as a Helmholtz coil.<i>Main results</i>.<i>In vitro</i>experiments showcase the remarkably improved sensitivity and stability of the detection system, with magnetic nanoparticles notably boosting the MIPS signal for hemorrhage. Moreover,<i>in vivo</i>experiments employing a rabbit autologous blood cerebral hemorrhage model reveal that with a hemorrhage volume of 2 ml, the experimental group with employed magnetic nanoparticles increased the MIPS signal change by 23-fold compared to the control group without magnetic nanoparticles.<i>Significance</i>. The sensitivity of MIPS for hemorrhage detection is significantly improved compared to traditional method. 
The magnetic nanoparticle-enhanced MIPS detection technique holds promise as an optimal solution for real-time, non-invasive bedside monitoring for cerebral hemorrhage.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-05 | DOI: 10.1088/1361-6579/ae2bbb
Leon Fesseler, Viktor Heinz, Henning Specks, Andreas Patzak, Dieter Blottner, Oliver Opatz, Niklas Pilz, Tomas L Bothe
Objective. Ultra-endurance cycling offers a natural laboratory for studying physiological responses under sustained extreme load. Continuous in-race monitoring is rarely reported. The aim of this study was to investigate the feasibility of a multimodal framework of physiological parameters including metabolic, cardiovascular, and muscle-mechanical patterns during an ultra-endurance event. Approach. This study stress-tests a multimodal framework of physiological parameters of a 58-year-old male athlete during the Race Across America (RAAM) 2024, covering 4933 km in 11 d from Oceanside, California, to Atlantic City, New Jersey. Parameters included energy expenditure, continuous blood glucose levels, heart rate, power output, passive muscle stiffness and resting tone, as well as sleep times. Main results. The multimodal monitoring toolkit proved feasible and provided continuous physiological measurements throughout the RAAM, enabling the observation of the following physiological changes: the athlete lost 2.3 kg of total weight and had an estimated energy deficit of 21 169 kcal. Blood glucose levels decreased over the course of the RAAM (0.92 mg dl⁻¹ d⁻¹, p < 0.001), with increased time spent below 100 mg dl⁻¹ (p < 0.001). Heart rate during cycling progressively decreased, stabilising at a plateau of 94 bpm. The power-output-to-heart-rate ratio initially dropped until day 7 before peaking on day 11. Mean passive muscle stiffness and resting tone increased during the race compared to baseline levels, with distinct response patterns observed between two leg muscles and one lower back muscle. The total sleep deficit was 65 h during the RAAM. Significance. Continuous, multimodal in-race physiological monitoring during the RAAM proved feasible and operationally useful, enabling real-time adjustments to pacing, fuelling and recovery. This framework offers a field-deployable template for ultra-endurance events.
Future research should focus on larger, multi-participant studies and long-term follow-up to characterise the physiological responses to extreme endurance.
{"title":"Continuous multimodal physiological monitoring during the Race Across America (RAAM) of a 58-year-old athlete.","authors":"Leon Fesseler, Viktor Heinz, Henning Specks, Andreas Patzak, Dieter Blottner, Oliver Opatz, Niklas Pilz, Tomas L Bothe","doi":"10.1088/1361-6579/ae2bbb","DOIUrl":"10.1088/1361-6579/ae2bbb","url":null,"abstract":"<p><p><i>Objective.</i>Ultra-endurance cycling offers a natural laboratory for studying physiological responses under sustained extreme load. Continuous in-race monitoring is rarely reported. The aim of this study was to investigate the feasibility of a multimodal framework of physiological parameters including metabolic, cardiovascular, and muscle-mechanical patterns during an ultra-endurance event.<i>Approach.</i>This study stress-tests a multimodal framework of physiological parameters of a 58-year-old male athlete during the Race Across America (RAAM) 2024, covering 4933 km in 11 d from Oceanside, California, to Atlantic City, New Jersey. Parameters included energy expenditure, continuous blood glucose levels, heart rate, power output, passive muscle stiffness and resting tone, as well as sleep times.<i>Main results.</i>The multimodal monitoring toolkit proved feasible and provided continuous, physiological measurements throughout the RAAM, enabling the observation of the following physiological changes: The athlete lost 2.3 kg of total weight and had an estimated energy deficit of 21 169 kcal. Blood glucose levels decreased over the course of the RAAM (0.92 mg dl<sup>-1</sup>d<sup>-1</sup>,<i>p</i>< 0.001), with an increased time spent below 100 mg dl<sup>-1</sup>(<i>p</i>< 0.001). Heart rate during cycling progressively decreased, stabilising at a plateau of 94 bpm. Power output-to-heart rate ratio initially dropped until day 7 before peaking on day 11. 
Mean passive muscle stiffness and resting tone increased during the race compared to baseline levels, with distinct response patterns observed between two leg muscles and one lower back muscle. The total sleep deficit was 65 h during the RAAM.<i>Significance.</i>Continuous, multimodal in-race physiological monitoring during the RAAM proved feasible and operationally useful, enabling real-time adjustments to pacing, fuelling and recovery. This framework offers a field-deployable template for ultra-endurance events. Future research should focus on larger, multi-participant studies and long-term follow-up to characterise the physiological responses to extreme endurance.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145743837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-30 | DOI: 10.1088/1361-6579/ae2231
Jeremy Levy, Noam Ben-Moshe, Uri Shalit, Joachim A Behar
Objective. Deep learning for continuous physiological signals, such as electrocardiography or oximetry, has achieved remarkable success in supervised learning scenarios where training and testing data are drawn from the same distribution. However, in real-world applications, models often fail to generalize due to distribution shifts between the source domain on which the model was trained and the target domain where it is deployed. A common and particularly challenging shift is one in which the source and target domain supports do not fully overlap. In this paper, we propose a novel framework, named Deep Unsupervised Domain adaptation using variable nEighbors (DUDE), to address this challenge. Approach. We introduce a new type of contrastive loss between the source and target domains using a dynamic neighbor-selection strategy, in which the number of neighbors for each sample is adaptively determined based on the density observed in the latent space. We use multiple real-world datasets as source and target domains, with target domains that include demographics, ethnicities, geographies, and comorbidities not present in the source domain. Main results. The experimental results demonstrate superior DUDE performance compared to baselines, with an improvement of up to 16% over the original Nearest-Neighbor Contrastive Learning of Visual Representations strategy. Significance. Our contribution provides evidence of the potential of DUDE to bridge the crucial gap of domain adaptation in medicine, potentially transforming patient care through more precise and adaptable diagnostic tools.
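The density-adaptive neighbor count at the heart of the approach can be sketched as follows. DUDE's exact density rule is not specified in the abstract; the linear rescaling of mean nearest-neighbor distance used here, along with the parameter names, is an illustrative assumption:

```python
import math

def adaptive_neighbor_counts(embeddings, k_min=1, k_max=5, base_k=3):
    """Assign each latent embedding a neighbor count between k_min and k_max:
    larger in dense regions (small mean distance to the base_k nearest
    points), smaller in sparse ones.  A density-aware count keeps contrastive
    positives meaningful where the latent space is sparsely populated."""
    mean_dists = []
    for i, e in enumerate(embeddings):
        dists = sorted(math.dist(e, f)
                       for j, f in enumerate(embeddings) if j != i)
        mean_dists.append(sum(dists[:base_k]) / base_k)
    lo, hi = min(mean_dists), max(mean_dists)
    span = (hi - lo) or 1.0
    # dense (small mean distance) -> k_max; sparse (large) -> k_min
    return [round(k_max - (m - lo) / span * (k_max - k_min))
            for m in mean_dists]
```

Samples in a tight cluster then contrast against many neighbors, while an isolated sample (e.g. from a demographic absent in the source domain) contrasts against few, avoiding spurious positives.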
{"title":"DUDE: deep unsupervised domain adaptation using variable nEighbors for physiological time series analysis.","authors":"Jeremy Levy, Noam Ben-Moshe, Uri Shalit, Joachim A Behar","doi":"10.1088/1361-6579/ae2231","DOIUrl":"10.1088/1361-6579/ae2231","url":null,"abstract":"<p><p><i>Objective.</i>Deep learning for continuous physiological signals, such as electrocardiography or oximetry, has achieved remarkable success in supervised learning scenarios where training and testing data are drawn from the same distribution. However, when evaluating real-world applications, models often fail to generalize due to distribution shifts between the source domain on which the model was trained and the target domain where it is deployed. A common and particularly challenging shift often encountered in reality is where the source and target domain supports do not fully overlap. In this paper, we propose a novel framework, named Deep Unsupervised Domain adaptation using variable nEighbors (DUDE), to address this challenge.<i>Approach.</i>We introduce a new type of contrastive loss between the source and target domains using a dynamic neighbor selection strategy, in which the number of neighbors for each sample is adaptively determined based on the density observed in the latent space. 
We use multiple real-world datasets as source and target domains, with target domains that included demographics, ethnicities, geographies, and comorbidities that were not present in the source domain.<i>Main results.</i>The experimental results demonstrate superior DUDE performance compared to baselines and with an improvement of up to 16% over the original Nearest-Neighbor Contrastive Learning of Visual Representations strategy.<i>Significance.</i>Our contribution provides evidence on the potential of using DUDE to bridge the crucial gap of domain adaptation in medicine, potentially transforming patient care through more precise and adaptable diagnostic tools.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145564979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-30 | DOI: 10.1088/1361-6579/ae06ee
Amy Edwards, Terry Fawden, Iwan Vaughan Roberts, Manohar Bance, Thomas Stone
Objective. Sit-to-stand (STS) and sit-to-walk (STW) movements are key functional tasks to master following lower limb amputation. They are core to activities of daily living, enabling patients to regain independence. Physiotherapists assess movement fluency (hesitation and smoothness) by observing STS and STW; however, this relies on extensive experience and lacks objectivity. This study aimed to establish objective, accessible and scalable quantitative measurements of movement fluency in amputees using instrumented movement analysis. Approach. Twelve transfemoral amputees (six limited community and six community ambulators) and six typical individuals completed walking, STS and STW tasks. Movement fluency was assessed using published algorithms to obtain hesitation and smoothness in STS and STW. Main results. In STW, hesitation and smoothness showed statistically significant differences among the three groups. Community ambulators were significantly less hesitant (p = 0.009) and smoother (p = 0.007) than the limited community ambulators, but significantly more hesitant (p < 0.001) and less smooth (p < 0.001) than typical individuals. In STS, the community ambulators were significantly smoother than the limited community ambulators (p < 0.001), but not significantly different from typical individuals (p = 0.68). Community ambulators walked significantly faster than limited community ambulators (p < 0.001) but significantly slower than typical individuals (p < 0.001). Significance. Assessment of movement after amputation is not just about walking speed. Other important functional tasks can differentiate amputees and therefore should be considered. An amputee must learn to master both the STS and STW tasks before they can independently walk. Quantifying movement fluency in functional tasks is important to understanding the restoration of function following limb loss, tracking rehabilitation, and classifying amputees.
While the study's small sample size reflects its feasibility design, the findings support future research with larger cohorts. Subsequent studies should incorporate power calculations to improve generalisability.
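The abstract does not specify which published smoothness algorithm was used; a common choice in the movement-analysis literature is the log dimensionless jerk (LDLJ) computed from a velocity profile. The sketch below is an illustrative implementation of that metric, not the authors' method; the signal and sampling rate are hypothetical.

```python
import numpy as np

def log_dimensionless_jerk(velocity: np.ndarray, fs: float) -> float:
    """Log dimensionless jerk (LDLJ) smoothness metric from a 1-D velocity profile.

    Higher (less negative) values indicate smoother movement.
    """
    dt = 1.0 / fs
    duration = velocity.size * dt
    v_peak = np.max(np.abs(velocity))
    # Jerk is the second time derivative of velocity.
    jerk = np.gradient(np.gradient(velocity, dt), dt)
    dimensionless_jerk = (duration ** 3 / v_peak ** 2) * np.sum(jerk ** 2) * dt
    return -np.log(dimensionless_jerk)

# Toy comparison: a bell-shaped velocity profile vs. a jittery version of it.
fs = 100.0
t = np.linspace(0.0, 1.0, 100)
smooth = np.sin(np.pi * t)
rng = np.random.default_rng(0)
jittery = smooth + 0.1 * rng.standard_normal(t.size)
```

With this metric, the smooth profile scores higher than the jittery one, matching the intuition that hesitant, jerky transfers should be penalised.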
{"title":"Quantifying movement fluency in amputees in key functional tasks.","authors":"Amy Edwards, Terry Fawden, Iwan Vaughan Roberts, Manohar Bance, Thomas Stone","doi":"10.1088/1361-6579/ae06ee","DOIUrl":"10.1088/1361-6579/ae06ee","url":null,"abstract":"<p><p><i>Objective.</i>Sit-to-stand (STS) and sit-to-walk (STW) movements are key functional tasks to master following lower limb amputation. They are core to activities of daily living, enabling patients to regain independence. Physiotherapists assess movement fluency (hesitation and smoothness) by observing STS and STW however, this relies on extensive experience and lacks objectivity. This study aimed to establish objective, accessible and scalable quantitative measurements of movement fluency in amputees using instrumented movement analysis.<i>Approach.</i>12 transfemoral amputees (six limited community and six community ambulators) and six typical individuals completed walking, STS and STW tasks. Movement fluency was assessed using published algorithms to obtain hesitation and smoothness in STS and STW.<i>Main results.</i>In STW, hesitation, and smoothness showed statistically significant differences among the three groups. Community ambulators were significantly less hesitant (<i>p</i>= 0.009) and smoother (<i>p</i>= 0.007) than the limited community ambulators, but significantly more hesitant (<i>p</i>< 0.001) and less smooth (<i>p</i>< 0.001) than typical individuals. In STS, the community ambulators were significantly smoother than the limited community ambulators (<i>p</i>< 0.001), but not significantly different from typical individuals (<i>p</i>= 0.68). Community ambulators walked significantly faster than limited community ambulators (<i>p</i>< 0.001) but significantly slower compared to typical individuals (<i>p</i>< 0.001).<i>Significance.</i>Assessment of movement after amputation is not just about walking speed. 
Other important functional tasks can differentiate amputees and therefore should be considered. An amputee must learn to master both the STS and STW tasks before they can independently walk. Quantifying movement fluency in functional tasks is important to understanding the restoration of function following limb loss, tracking rehabilitation, and classifying amputees. While the study's small sample size reflects its feasibility design, findings support future research with larger cohorts. Subsequent studies should incorporate power calculations to improve generalisability.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-24DOI: 10.1088/1361-6579/ae2c3c
Luca Cerina, Gabriele B Papini, Sebastiaan Overeem, Rik Vullings, Pedro Fonseca
Objective.In the analysis of obstructive sleep apnea (OSA), the main clinical index is the apnea-hypopnea index (AHI), or the average rate of respiratory events during sleep. This rate fluctuates during sleep, due to a variety of factors, such as sleep phases, body position, and other physiological mechanisms. Two people with the same AHI may manifest OSA in drastically different ways. Therefore, a computed degree of statistical uncertainty alongside the average AHI would be a useful addition to a comprehensive sleep report. In the current literature, the AHI uncertainty was modeled as a Poisson process and empirically estimated using bootstrap sampling of inter-event times (or intervals). However, we observed that long wake bouts, stochastic outliers in the intervals' distribution, and events' dispersion directly influence the bootstrap sampling, with either empirical over-estimation or theoretical under-estimation. In some cases, the result is a spurious empirical estimate of both AHI and its uncertainty. In others, a broad AHI uncertainty can be the correct description of the underlying process, and a Poisson model would be ill-fitted.Approach.We propose here three methods that improve the estimation of AHI uncertainty based on bootstrap sampling, making it more robust to the presence of spurious intervals caused by long wake bouts and events' overdispersion. We examine the violation of Poisson assumptions as the main cause of discrepancy between theoretical and empirical estimates, and propose the Negative Binomial distribution as an alternative model.Main results.Compared to the original Poisson-based method, we proved that the Negative Binomial can be a better theoretical model of uncertainty.
Furthermore, our proposed methodology reduced the estimation error of the AHI (in up to 91% of the recordings) and the discrepancy with theoretical confidence intervals, under both Poisson and Negative Binomial models.Significance.This work provides notable improvements in the theoretical models of AHI uncertainty and in the robustness of empirical estimates.
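The baseline approach the paper builds on can be sketched as follows: resample the observed inter-event intervals with replacement, recompute the event rate in each resample, and take empirical quantiles as the confidence interval. This is a minimal illustration of that bootstrap idea, not the authors' three proposed refinements; the simulated exponential intervals stand in for real respiratory-event data.

```python
import numpy as np

def bootstrap_ahi_ci(intervals_h, n_boot=2000, ci=0.95, seed=0):
    """Point estimate and bootstrap confidence interval for the AHI.

    intervals_h: inter-event times in hours; the AHI of each resample is
    the number of events divided by the resampled total sleep time.
    """
    rng = np.random.default_rng(seed)
    intervals_h = np.asarray(intervals_h, dtype=float)
    n = intervals_h.size
    boot_ahi = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(intervals_h, size=n, replace=True)
        boot_ahi[b] = n / sample.sum()  # events per hour in this resample
    alpha = 1.0 - ci
    lo, hi = np.quantile(boot_ahi, [alpha / 2, 1.0 - alpha / 2])
    return n / intervals_h.sum(), (lo, hi)

# Simulated Poisson-like process: exponential intervals at ~15 events/h.
rng = np.random.default_rng(1)
intervals = rng.exponential(1 / 15, size=300)
ahi, (lo, hi) = bootstrap_ahi_ci(intervals)
```

A single very long wake-bout interval inserted into `intervals` would visibly widen or bias this interval, which is precisely the failure mode the paper addresses.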
{"title":"Estimation of apnea-hypopnea index uncertainty in the presence of long wake bouts and overdispersion.","authors":"Luca Cerina, Gabriele B Papini, Sebastiaan Overeem, Rik Vullings, Pedro Fonseca","doi":"10.1088/1361-6579/ae2c3c","DOIUrl":"10.1088/1361-6579/ae2c3c","url":null,"abstract":"<p><p><i>Objective.</i>In the analysis of obstructive sleep apnea (OSA), the main clinical index is the apnea-hypopnea index (AHI), or the average rate of respiratory events during sleep. This rate fluctuates during sleep, due to a variety of factors, such as sleep phases, body position, and other physiological mechanisms. Two people with the same AHI may manifest OSA may manifest OSA in drastically different ways. Therefore, a computed degree of statistical uncertainty alongside the average AHI would be a useful addition to a comprehensive sleep report-. In the current literature, the AHI uncertainty was modeled as a Poisson process and empirically estimated using bootstrap sampling of inter-event times (or intervals). However, we observed that long wake bouts, stochastic outliers in the intervals' distribution, and events' dispersion directly influence the bootstrap sampling, with either empirical over-estimation or theoretical under-estimation. In some cases, the result is a spurious empirical estimate of both AHI and its uncertainty. In others, a broad AHI uncertainty can be the correct description of the underlying process, and a Poisson model would be ill-fitted.<i>Approach.</i>We propose here three methods that improve the estimation of AHI uncertainty based on bootstrap sampling, making it more robust to the presence of spurious intervals caused by long wake bouts and events' overdispersion. 
We examine the violation of Poisson assumptions as the main cause of discrepancy between theoretical and empirical estimates, and propose the Negative Binomial distribution as an alternative model.<i>Main results.</i>Compared to the original Poisson-based method, we proved that the Negative Binomial can be a better theoretical model of uncertainty. Furthermore, our proposed methodology improved the estimation error of both AHI (up to 91% of the recordings) and the discrepancy with theoretical confidence intervals, in both Poisson and Negative Binomial models.<i>Significance.</i>This work provides notable improvements in the theoretical models of AHI uncertainty and in the robustness of empirical estimates.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145743758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-23DOI: 10.1088/1361-6579/ae2aa7
Ali Howidi, Ryan G L Koh, Niveetha Wijendran, Koosha Omidian, Krish Chhajer, Paul B Yoo
Objective.Hypertension is a leading cause of mortality worldwide, for which myriad treatment options are available. It is widely considered that continuous measurement of arterial blood pressure (BP) could improve the treatment of hypertension; however, chronically monitoring patient BP remains a significant challenge. In this study, we investigated a novel approach that uses an implantable electrode to generate an artifact signal for predicting arterial BP.Approach.In isoflurane-anesthetized rats (n= 10, male), the right common carotid artery was instrumented with a multi-contact cuff electrode to acquire the artifact signal, termed the electro-vascular-gram (EVG), and the contralateral common carotid artery was catheterized to measure intra-arterial BP. The EVG signals were processed (e.g. extracting Catch22 features) and applied to linear regression, random forest (RF) regressor, and convolutional neural network models to predict systolic and diastolic BP.Main results.Among the various models tested with the EVG data, the RF model + Catch22 features method achieved the highest performance, yielding predicted BP values (error < 5 mmHg) in 82.6%-100% and 84.1%-99.9% of the testing set for systolic and diastolic, respectively. A 5-fold cross-validation demonstrated similar performance by predicting BP values (error < 5 mmHg) in 91.5 ± 0.1% and 92.4 ± 0.1% of testing data for systolic and diastolic, respectively.Significance.This proof-of-concept study supports the feasibility of using an implantable electrode and machine learning models for potentially measuring arterial BP in a continuous fashion. Further system development is warranted prior to clinical translation.
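The best-performing pipeline (per-window time-series features fed to a random forest regressor) can be sketched generically. In this illustration the Catch22 features are replaced by a few simple summary statistics (in practice the `pycatch22` package provides the real feature set), and the EVG windows and paired systolic BP values are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def summary_features(window: np.ndarray) -> np.ndarray:
    """Simple per-window features standing in for Catch22."""
    return np.array([
        window.mean(),
        window.std(),
        np.ptp(window),                 # peak-to-peak range
        np.median(window),
        np.abs(np.diff(window)).mean(), # mean absolute first difference
    ])

# Hypothetical training data: 150 signal windows with paired systolic BP,
# where BP is (artificially) tied to window amplitude for demonstration.
rng = np.random.default_rng(0)
amplitudes = rng.uniform(0.5, 2.0, size=150)
windows = [a * rng.standard_normal(200) for a in amplitudes]
sbp = np.array([80.0 + 30.0 * w.std() for w in windows])

X = np.vstack([summary_features(w) for w in windows])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, sbp)
predictions = model.predict(X)
```

On real data one would report held-out (e.g. cross-validated) error rather than training-set fit, as the study does with its 5-fold evaluation.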
{"title":"An electrical pulse artifact signal for estimating arterial blood pressure: a proof-of-concept study.","authors":"Ali Howidi, Ryan G L Koh, Niveetha Wijendran, Koosha Omidian, Krish Chhajer, Paul B Yoo","doi":"10.1088/1361-6579/ae2aa7","DOIUrl":"10.1088/1361-6579/ae2aa7","url":null,"abstract":"<p><p><i>Objective.</i>Hypertension is a leading cause of mortality worldwide, for which myriad treatment options are available. It is widely considered that continuous measurement of arterial blood pressure (BP) could improve the treatment of hypertension; however, chronically monitoring patient BP remains a significant challenge. In this study, we investigated a novel approach that uses an implantable electrode to generate an artifact signal for predicting arterial BP.<i>Approach.</i>In isoflurane anesthetized rats (<i>n</i>= 10, male), the right common carotid artery was instrumented with a multi-contact cuff electrode to acquire the artifact signal-termed the electro-vascular-gram (EVG) and the contralateral common carotid artery was catheterized to measure intra-arterial BP. The EVG signals were processed (e.g. extract Catch22 features) and applied to linear regression, random forest (RF) regressor, and convolutional neural network models to predict systolic and diastolic BP.<i>Main results.</i>Among the various models tested with the EVG data, the RF model + Catch22 features method achieved the highest performance, yielding predicted BP values (error < 5 mmHg) in 82.6%-100% and 84.1%-99.9% of the testing set for systolic and diastolic, respectively. A 5-fold cross-validation demonstrated similar performance by predicting BP values (error < 5 mmHg) in 91.5 ± 0.1% and 92.4 ± 0.1% of testing data for systolic and diastolic, respectively.<i>Significance.</i>This proof-of-concept study supports the feasibility of using an implantable electrode and machine learning models for potentially measuring arterial BP in continuous fashion. 
Further system development is warranted prior to clinical translation.</p>","PeriodicalId":20047,"journal":{"name":"Physiological measurement","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145715121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}