OneFi: One-Shot Recognition for Unseen Gesture via COTS WiFi

Rui Xiao, Jianwei Liu, Jinsong Han, K. Ren
{"title":"OneFi:通过COTS WiFi一次性识别看不见的手势","authors":"Rui Xiao, Jianwei Liu, Jinsong Han, K. Ren","doi":"10.1145/3485730.3485936","DOIUrl":null,"url":null,"abstract":"WiFi-based Human Gesture Recognition (HGR) becomes increasingly promising for device-free human-computer interaction. However, existing WiFi-based approaches have not been ready for real-world deployment due to the limited scalability, especially for unseen gestures. The reason behind is that when introducing unseen gestures, prior works have to collect a large number of samples and re-train the model. While the recent advance of few-shot learning has brought new opportunities to solve this problem, the overhead has not been effectively reduced. This is because these methods still require enormous data to learn adequate prior knowledge, and their complicated training process intensifies the regular training cost. In this paper, we propose a WiFi-based HGR system, namely OneFi, which can recognize unseen gestures with only one (or few) labeled samples. OneFi fundamentally addresses the challenge of high overhead. On the one hand, OneFi utilizes a virtual gesture generation mechanism such that the massive efforts in prior works can be significantly alleviated in the data collection process. On the other hand, OneFi employs a lightweight one-shot learning framework based on transductive fine-tuning to eliminate model re-training. We additionally design a self-attention based backbone, termed as WiFi Transformer, to minimize the training cost of the proposed framework. We establish a real-world testbed using commodity WiFi devices and perform extensive experiments over it. The evaluation results show that OneFi can recognize unseen gestures with the accuracy of 84.2, 94.2, 95.8, and 98.8% when 1, 3, 5, 7 labeled samples are available, respectively, while the overall training process takes less than two minutes.","PeriodicalId":356322,"journal":{"name":"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"33","resultStr":"{\"title\":\"OneFi: One-Shot Recognition for Unseen Gesture via COTS WiFi\",\"authors\":\"Rui Xiao, Jianwei Liu, Jinsong Han, K. Ren\",\"doi\":\"10.1145/3485730.3485936\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"WiFi-based Human Gesture Recognition (HGR) becomes increasingly promising for device-free human-computer interaction. However, existing WiFi-based approaches have not been ready for real-world deployment due to the limited scalability, especially for unseen gestures. The reason behind is that when introducing unseen gestures, prior works have to collect a large number of samples and re-train the model. While the recent advance of few-shot learning has brought new opportunities to solve this problem, the overhead has not been effectively reduced. This is because these methods still require enormous data to learn adequate prior knowledge, and their complicated training process intensifies the regular training cost. In this paper, we propose a WiFi-based HGR system, namely OneFi, which can recognize unseen gestures with only one (or few) labeled samples. OneFi fundamentally addresses the challenge of high overhead. On the one hand, OneFi utilizes a virtual gesture generation mechanism such that the massive efforts in prior works can be significantly alleviated in the data collection process. 
On the other hand, OneFi employs a lightweight one-shot learning framework based on transductive fine-tuning to eliminate model re-training. We additionally design a self-attention based backbone, termed as WiFi Transformer, to minimize the training cost of the proposed framework. We establish a real-world testbed using commodity WiFi devices and perform extensive experiments over it. The evaluation results show that OneFi can recognize unseen gestures with the accuracy of 84.2, 94.2, 95.8, and 98.8% when 1, 3, 5, 7 labeled samples are available, respectively, while the overall training process takes less than two minutes.\",\"PeriodicalId\":356322,\"journal\":{\"name\":\"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"33\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3485730.3485936\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3485730.3485936","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 33

Abstract

WiFi-based Human Gesture Recognition (HGR) is becoming increasingly promising for device-free human-computer interaction. However, existing WiFi-based approaches are not ready for real-world deployment due to their limited scalability, especially for unseen gestures. The reason is that, when introducing unseen gestures, prior works have to collect a large number of samples and re-train the model. While recent advances in few-shot learning have brought new opportunities to solve this problem, the overhead has not been effectively reduced: these methods still require enormous amounts of data to learn adequate prior knowledge, and their complicated training process adds to the regular training cost. In this paper, we propose a WiFi-based HGR system, namely OneFi, which can recognize unseen gestures with only one (or a few) labeled samples. OneFi fundamentally addresses the challenge of high overhead. On the one hand, OneFi utilizes a virtual gesture generation mechanism so that the massive data-collection effort of prior works can be significantly alleviated. On the other hand, OneFi employs a lightweight one-shot learning framework based on transductive fine-tuning to eliminate model re-training. We additionally design a self-attention-based backbone, termed WiFi Transformer, to minimize the training cost of the proposed framework. We establish a real-world testbed using commodity WiFi devices and perform extensive experiments on it. The evaluation results show that OneFi can recognize unseen gestures with accuracies of 84.2%, 94.2%, 95.8%, and 98.8% when 1, 3, 5, and 7 labeled samples are available, respectively, while the overall training process takes less than two minutes.
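The abstract gives only a high-level view of the WiFi Transformer backbone. As a rough illustration of how a self-attention encoder can operate over a sequence of per-frame WiFi features, here is a minimal PyTorch sketch; the input layout, model dimensions, depth, and class count are assumptions for illustration, not the paper's actual configuration.

```python
# A minimal sketch of a self-attention backbone for WiFi gesture features,
# in the spirit of the "WiFi Transformer" named in the abstract. All
# hyperparameters and the input shape are hypothetical.
import torch
import torch.nn as nn

class WiFiTransformerSketch(nn.Module):
    def __init__(self, feat_dim=90, d_model=128, n_heads=4,
                 n_layers=2, n_classes=6):
        super().__init__()
        # Project per-frame features (e.g., CSI amplitudes across
        # subcarriers/antennas) into the model dimension.
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, feat_dim)
        h = self.encoder(self.proj(x))   # self-attention over time steps
        emb = h.mean(dim=1)              # pool over time -> gesture embedding
        return self.head(emb), emb       # class logits and embedding

model = WiFiTransformerSketch()
logits, emb = model(torch.randn(8, 100, 90))  # 8 samples, 100 frames each
```

Because self-attention processes all time steps in parallel rather than sequentially, a shallow encoder like this is comparatively cheap to train, which is consistent with the abstract's goal of minimizing training cost.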
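Transductive fine-tuning, the basis of OneFi's one-shot framework, can likewise be sketched generically: a linear head is initialized from the few labeled (support) samples and refined jointly against the unlabeled (query) batch. The abstract does not spell out OneFi's exact objective, so the entropy-based transductive term and all hyperparameters below follow the generic recipe and are assumptions.

```python
# Hedged sketch of transductive fine-tuning for one-shot classification:
# fit the head on the labeled support set while minimizing prediction
# entropy on the unlabeled query set. `embed` is a frozen feature
# extractor (e.g., the backbone above); the loss design is an assumption.
import torch
import torch.nn.functional as F

def transductive_finetune(embed, support_x, support_y, query_x,
                          n_classes, steps=50, lr=1e-2):
    with torch.no_grad():                      # backbone stays frozen
        s_emb = embed(support_x)
        q_emb = embed(query_x)
    # Class prototypes from support embeddings initialize the head.
    protos = torch.stack([s_emb[support_y == c].mean(0)
                          for c in range(n_classes)])
    w = protos.clone().requires_grad_(True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        ce = F.cross_entropy(s_emb @ w.t(), support_y)   # fit support labels
        q_prob = (q_emb @ w.t()).softmax(dim=-1)
        ent = -(q_prob * q_prob.clamp_min(1e-9).log()).sum(-1).mean()
        loss = ce + ent                        # transductive entropy term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (q_emb @ w.t()).argmax(dim=-1)      # query predictions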