Hearables: In-Ear Multimodal Data Fusion for Robust Heart Rate Estimation

Marek Żyliński, Amir Nassibi, Edoardo Occhipinti, Adil Malik, Matteo Bermond, H. Davies, Danilo P. Mandic
BioMedInformatics, published 2024-04-01. DOI: https://doi.org/10.3390/biomedinformatics4020051

Abstract

Background: Ambulatory heart rate (HR) monitors that acquire electrocardiogram (ECG) and/or photoplethysmography (PPG) signals from the torso, wrists, or ears are notably less accurate than clinical measurements during tasks involving high levels of movement. However, a reliable estimation of HR can be obtained through data fusion from different sensors. Such methods are especially suitable for multimodal hearable devices, where heart rate can be tracked from different modalities, including electrical (ECG), optical (PPG), and acoustic (heart tones) signals. Combined information from different modalities can compensate for the limitations of any single source. Methods: In this paper, we evaluate the possible application of data fusion methods in hearables. We assess data fusion for heart rate estimation from simultaneous in-ear ECG and in-ear PPG, recorded on ten subjects while performing 5-minute sitting and walking tasks. Results: Our findings show that data fusion methods provide a similar level of mean absolute error to the best single-source heart rate estimation, but with much lower intra-subject variability, especially during walking activities. Conclusion: We conclude that data fusion methods provide more robust HR estimation than a single cardiovascular signal. These methods can enhance the performance of wearable devices, especially multimodal hearables, in heart rate tracking during physical activity.
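The abstract does not detail the specific fusion algorithms evaluated in the paper. As a minimal illustration of the general idea only, the sketch below fuses per-modality HR estimates by inverse-variance weighting, so that a modality with a steadier recent estimate (e.g., in-ear ECG at rest) dominates one corrupted by motion artefacts (e.g., PPG while walking). The function name and the example numbers are hypothetical, not taken from the paper.

```python
import numpy as np

def fuse_hr(estimates, variances):
    """Inverse-variance weighted fusion of per-modality HR estimates (bpm).

    estimates -- HR estimate from each modality, e.g. [ecg_hr, ppg_hr]
    variances -- recent variance of each modality's estimate; a noisier
                 (higher-variance) source receives a smaller weight.
    """
    e = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # weight = 1 / variance
    return float(np.sum(w * e) / np.sum(w))

# Hypothetical example: ECG reads 72 bpm with low variance, motion-corrupted
# PPG reads 80 bpm with 4x the variance; the fused value stays close to ECG.
fused = fuse_hr([72.0, 80.0], [1.0, 4.0])  # -> 73.6 bpm
```

In a streaming setting, the variances would typically be tracked over a sliding window per modality, which is what lets the fusion suppress whichever sensor is momentarily unreliable while matching the best single source in mean absolute error.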