EarPPG: Securing Your Identity with Your Ears

Seokmin Choi, Junghwan Yim, Yincheng Jin, Yang Gao, Jiyang Li, Zhanpeng Jin
{"title":"EarPPG: Securing Your Identity with Your Ears","authors":"Seokmin Choi, Junghwan Yim, Yincheng Jin, Yang Gao, Jiyang Li, Zhanpeng Jin","doi":"10.1145/3581641.3584070","DOIUrl":null,"url":null,"abstract":"Wearable devices have become indispensable gadgets in people’s daily lives nowadays; especially wireless earphones have experienced unprecedented growth in recent years, which lead to increasing interest and explorations of user authentication techniques. Conventional user authentication methods embedded in wireless earphones that use microphones or other modalities are vulnerable to environmental factors, such as loud noises or occlusions. To address this limitation, we introduce EarPPG, a new biometric modality that takes advantage of the unique in-ear photoplethysmography (PPG) signals, altered by a user’s unique speaking behaviors. When the user is speaking, muscle movements cause changes in the blood vessel geometry, inducing unique PPG signal variations. As speaking behaviors and PPG signals are unique, the EarPPG combines both biometric traits and presents a secure and obscure authentication solution. The system first detects and segments EarPPG signals and proceeds to extract effective features to construct a user authentication model with the 1D ReGRU network. We conducted comprehensive real-world evaluations with 25 human participants and achieved 94.84% accuracy, 0.95 precision, recall, and f1-score, respectively. Moreover, considering the practical implications, we conducted several extensive in-the-wild experiments, including body motions, occlusions, lighting, and permanence. 
Overall outcomes of this study possess the potential to be embedded in future smart earable devices.","PeriodicalId":118159,"journal":{"name":"Proceedings of the 28th International Conference on Intelligent User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 28th International Conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581641.3584070","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Wearable devices have become indispensable gadgets in people’s daily lives; wireless earphones in particular have experienced unprecedented growth in recent years, leading to increasing interest in and exploration of user authentication techniques. Conventional user authentication methods embedded in wireless earphones that rely on microphones or other modalities are vulnerable to environmental factors such as loud noise or occlusion. To address this limitation, we introduce EarPPG, a new biometric modality that takes advantage of unique in-ear photoplethysmography (PPG) signals altered by a user’s unique speaking behaviors. When the user speaks, muscle movements change the blood vessel geometry, inducing unique PPG signal variations. Because both speaking behaviors and PPG signals are unique, EarPPG combines the two biometric traits and presents a secure and obscure authentication solution. The system first detects and segments EarPPG signals, then extracts effective features to construct a user authentication model with the 1D ReGRU network. We conducted comprehensive real-world evaluations with 25 human participants and achieved 94.84% accuracy and 0.95 precision, recall, and F1-score. Moreover, considering the practical implications, we conducted several extensive in-the-wild experiments covering body motion, occlusion, lighting, and permanence. The overall outcomes of this study have the potential to be embedded in future smart earable devices.
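The abstract outlines a detect-segment-extract-classify pipeline but does not specify its internals (the 1D ReGRU architecture, features, and thresholds are not given here). As an illustration only, the following NumPy sketch shows the general shape of such a pipeline under simplified, hypothetical assumptions: fixed-length windowing in place of the paper's signal segmentation, per-window z-score normalization as the feature step, and a correlation match against an enrollment template in place of the learned network. All function names, the synthetic signals, and the 0.5 threshold are invented for illustration.

```python
import numpy as np

def segment_signal(sig, fs=100, win_s=1.0):
    """Split a 1-D PPG trace into fixed-length windows (stand-in for the
    paper's detection/segmentation stage)."""
    n = int(fs * win_s)
    k = len(sig) // n
    return sig[:k * n].reshape(k, n)

def zscore(windows):
    """Per-window z-score normalization, a common PPG preprocessing step."""
    mu = windows.mean(axis=1, keepdims=True)
    sd = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mu) / sd

def enroll(windows):
    """Enrollment template: mean of a user's normalized windows
    (stand-in for training a per-user model)."""
    return zscore(windows).mean(axis=0)

def authenticate(window, template, threshold=0.5):
    """Accept if the candidate window correlates strongly with the
    enrolled template (hypothetical decision rule)."""
    w = zscore(window[None, :])[0]
    r = np.corrcoef(w, template)[0, 1]
    return r >= threshold

# Synthetic pulse-like signals for two "users" with different waveform shapes.
fs = 100
t = np.arange(0, 10, 1 / fs)
user_a = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)
user_b = np.sin(2 * np.pi * 1.5 * t)

template = enroll(segment_signal(user_a, fs))
genuine = authenticate(segment_signal(user_a, fs)[0], template)   # should accept
impostor = authenticate(segment_signal(user_b, fs)[0], template)  # should reject
```

A learned model such as the paper's 1D ReGRU would replace the template-matching step, consuming the normalized windows and producing a per-user accept/reject decision.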