IndexPen: Two-Finger Text Input with Millimeter-Wave Radar

Hao-Shun Wei, Ziheng Li, Alexander D. Galvan, Zhuoran Su, Xiao Zhang, E. Solovey, K. Pahlavan
{"title":"IndexPen: Two-Finger Text Input with Millimeter-Wave Radar","authors":"Hao‐Shun Wei, Worcester, Li Ziheng, Alexander D. Galvan, SU Zhuoran, Xiao Zhang, E. Solovey, Hao‐Shun Wei, Ziheng Li, Alexander D. Galvan, Zhuoran Su, Xiao Zhang, K. Pahlavan","doi":"10.1145/3534601","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce IndexPen , a novel interaction technique for text input through two-finger in-air micro-gestures, enabling touch-free, effortless, tracking-based interaction, designed to mirror real-world writing. Our system is based on millimeter-wave radar sensing, and does not require instrumentation on the user. IndexPen can successfully identify 30 distinct gestures, representing the letters A-Z , as well as Space , Backspace , Enter , and a special Activation gesture to prevent unintentional input. Additionally, we include a noise class to differentiate gesture and non-gesture noise. We present our system design, including the radio frequency (RF) processing pipeline, classification model, and real-time detection algorithms. We further demonstrate our proof-of-concept system with data collected over ten days with five participants yielding 95.89% cross-validation accuracy on 31 classes (including noise ). Moreover, we explore the learnability and adaptability of our system for real-world text input with 16 participants who are first-time users to IndexPen over five sessions. After each session, the pre-trained model from the previous five-user study is calibrated on the data collected so far for a new user through transfer learning. The F-1 score showed an average increase of 9.14% per session with the calibration, reaching an average of 88.3% on the last session across the 16 users. Meanwhile, we show that the users can type sentences with IndexPen at 86.2% accuracy, measured by string similarity. This work builds a foundation and vision for future interaction interfaces that could be enabled with this paradigm.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3534601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 5

Abstract

In this paper, we introduce IndexPen, a novel interaction technique for text input through two-finger in-air micro-gestures, enabling touch-free, effortless, tracking-based interaction designed to mirror real-world writing. Our system is based on millimeter-wave radar sensing and does not require instrumentation on the user. IndexPen can successfully identify 30 distinct gestures, representing the letters A-Z as well as Space, Backspace, Enter, and a special Activation gesture to prevent unintentional input. Additionally, we include a noise class to differentiate gestures from non-gesture noise. We present our system design, including the radio frequency (RF) processing pipeline, classification model, and real-time detection algorithms. We further demonstrate our proof-of-concept system with data collected over ten days from five participants, yielding 95.89% cross-validation accuracy on 31 classes (including noise). Moreover, we explore the learnability and adaptability of our system for real-world text input with 16 participants, all first-time users of IndexPen, over five sessions. After each session, the model pre-trained on the earlier five-user study is calibrated on the data collected so far for each new user through transfer learning. With this calibration, the F-1 score increased by an average of 9.14% per session, reaching an average of 88.3% in the last session across the 16 users. We also show that users can type sentences with IndexPen at 86.2% accuracy, measured by string similarity. This work builds a foundation and vision for future interaction interfaces that could be enabled by this paradigm.
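
Two of the techniques named in the abstract can be made concrete. First, sentence accuracy is "measured by string similarity"; the exact formula is not given on this page, so the sketch below assumes a normalized Levenshtein (edit-distance) ratio, with illustrative function names that are not from the paper:

```python
# Minimal sketch of a string-similarity accuracy metric, assuming a
# normalized Levenshtein (edit-distance) ratio; the paper's exact
# formula is not stated here, so treat this as illustrative.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def string_similarity(target: str, typed: str) -> float:
    """Similarity in [0, 1]: 1 - (edit distance / longer string length)."""
    if not target and not typed:
        return 1.0
    return 1.0 - levenshtein(target, typed) / max(len(target), len(typed))

# Example: one substituted character in an 11-character sentence -> ~0.91.
print(string_similarity("HELLO WORLD", "HELLO WORLD"))  # 1.0
print(string_similarity("HELLO WORLD", "HELLO W0RLD"))  # ~0.909
```

Second, the per-session calibration adapts the pre-trained five-user model to a new user through transfer learning. Below is a minimal sketch of that idea in PyTorch, assuming a generic convolutional classifier over radar frames; the architecture, layer names, and hyperparameters are placeholders, not the authors' model:

```python
# Sketch of per-user calibration via transfer learning: freeze the
# shared feature extractor and fine-tune only the classification head
# on the new user's data. All model details here are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 31  # A-Z, Space, Backspace, Enter, Activation, plus noise

class GestureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(   # placeholder feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())
        self.head = nn.Linear(16 * 8 * 8, NUM_CLASSES)

    def forward(self, x):
        return self.head(self.features(x))

def calibrate(model: GestureNet, loader, epochs: int = 3):
    """Fine-tune the head on one user's labeled gesture data."""
    for p in model.features.parameters():
        p.requires_grad = False         # keep shared features fixed
    opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:             # new user's radar frames, labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```

Freezing the shared feature extractor and adapting only the head is one common way to calibrate with little per-user data; the paper's actual fine-tuning scheme may differ.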