Unsupervised Feature Extraction From Raw Data for Gesture Recognition With Wearable Ultralow-Power Ultrasound

Sergei Vostrikov, Matteo Anderegg, Luca Benini, Andrea Cossettini
{"title":"Unsupervised Feature Extraction From Raw Data for Gesture Recognition With Wearable Ultralow-Power Ultrasound","authors":"Sergei Vostrikov;Matteo Anderegg;Luca Benini;Andrea Cossettini","doi":"10.1109/TUFFC.2024.3404997","DOIUrl":null,"url":null,"abstract":"Wearable ultrasound (US) is a novel sensing approach that shows promise in multiple application domains, and specifically in hand gesture recognition (HGR). In fact, US enables to collect information from deep musculoskeletal structures at high spatiotemporal resolution and high signal-to-noise ratio, making it a perfect candidate to complement surface electromyography for improved accuracy performance and on-the-edge classification. However, existing wearable solutions for US-based gesture recognition are not sufficiently low power for continuous, long-term operation. On top of that, practical hardware limitations of wearable US devices (limited power budget, reduced wireless throughput, and restricted computational power) set the need for the compressed size of models for feature extraction and classification. To overcome these limitations, this article presents a novel end-to-end approach for feature extraction from raw musculoskeletal US data suited for edge computing, coupled with an armband for HGR based on a truly wearable (12 cm2, 9 g), ultralow-power (ULP) (16 mW) US probe. The proposed approach uses a 1-D convolutional autoencoder (CAE) to compress raw US data by \n<inline-formula> <tex-math>$20\\times $ </tex-math></inline-formula>\n while preserving the main amplitude features of the envelope signal. The latent features of the autoencoder are used to train an XGBoost classifier for HGR on datasets collected with a custom US armband, considering armband removal/repositioning in between sessions. Our approach achieves a classification accuracy of 96%. Furthermore, the proposed unsupervised feature extraction approach offers generalization capabilities for intersubject use, as demonstrated by testing the pretrained encoder on a different subject and conducting posttraining analysis, revealing that the operations performed by the encoder are subject-independent. The autoencoder is also quantized to 8-bit integers and deployed on a ULP wearable US probe along with the XGBoost classifier, allowing for a gesture recognition rate \n<inline-formula> <tex-math>$\\geq 25$ </tex-math></inline-formula>\n Hz and leading to 21% lower power consumption [at 30 frames/s (FPS)] compared to the conventional approach (raw data transmission and remote processing).","PeriodicalId":13322,"journal":{"name":"IEEE transactions on ultrasonics, ferroelectrics, and frequency control","volume":"71 7","pages":"831-841"},"PeriodicalIF":3.0000,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on ultrasonics, ferroelectrics, and frequency control","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10538295/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ACOUSTICS","Score":null,"Total":0}

Abstract

Wearable ultrasound (US) is a novel sensing approach that shows promise in multiple application domains, and specifically in hand gesture recognition (HGR). US can collect information from deep musculoskeletal structures at high spatiotemporal resolution and high signal-to-noise ratio, making it a strong candidate to complement surface electromyography for improved accuracy and on-the-edge classification. However, existing wearable solutions for US-based gesture recognition are not sufficiently low power for continuous, long-term operation. Moreover, the practical hardware limitations of wearable US devices (limited power budget, reduced wireless throughput, and restricted computational power) demand compact models for feature extraction and classification. To overcome these limitations, this article presents a novel end-to-end approach for feature extraction from raw musculoskeletal US data suited for edge computing, coupled with an armband for HGR based on a truly wearable (12 cm², 9 g), ultralow-power (ULP) (16 mW) US probe. The proposed approach uses a 1-D convolutional autoencoder (CAE) to compress raw US data by $20\times$ while preserving the main amplitude features of the envelope signal. The latent features of the autoencoder are used to train an XGBoost classifier for HGR on datasets collected with a custom US armband, accounting for armband removal/repositioning between sessions. Our approach achieves a classification accuracy of 96%. Furthermore, the proposed unsupervised feature extraction approach offers generalization capabilities for intersubject use, as demonstrated by testing the pretrained encoder on a different subject and conducting posttraining analysis, which reveals that the operations performed by the encoder are subject-independent. The autoencoder is also quantized to 8-bit integers and deployed on a ULP wearable US probe along with the XGBoost classifier, allowing for a gesture recognition rate $\geq 25$ Hz and leading to 21% lower power consumption [at 30 frames/s (FPS)] compared with the conventional approach (raw data transmission and remote processing).
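The two-stage pipeline described above (unsupervised compression with a 1-D CAE, followed by gradient-boosted classification on the latent features) can be illustrated with a short sketch. The code below is a hypothetical, minimal PyTorch/xgboost rendition, not the authors' implementation: all layer counts, kernel sizes, strides, and hyperparameters are assumptions. With these illustrative settings, a single-channel 1024-sample A-scan is reduced to a 2×32 latent map, i.e., a 16× compression (the paper reports 20× with its own architecture).

```python
# Minimal sketch (assumed architecture, not the paper's exact model) of a
# 1-D convolutional autoencoder whose latent features feed an XGBoost
# classifier for hand gesture recognition.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class ConvAutoencoder1D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions downsample time by 4*4*2 = 32;
        # with 2 latent channels on a 1-channel input, the net data
        # reduction is 16x (illustrative numbers only).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(4, 2, kernel_size=5, stride=2, padding=2),
        )
        # Decoder mirrors the encoder; it is only needed during the
        # unsupervised training phase (e.g., MSE reconstruction loss on
        # unlabeled scans) and is dropped at deployment time.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(2, 4, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(4, 8, 9, stride=4, padding=4, output_padding=3), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 9, stride=4, padding=4, output_padding=3),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent features for classification
        return self.decoder(z), z      # reconstruction + latent code

# Stage 2: train a gradient-boosted classifier on the (frozen) latent codes.
model = ConvAutoencoder1D().eval()
scans = torch.randn(512, 1, 1024)            # placeholder raw US A-scans
with torch.no_grad():
    _, z = model(scans)
features = z.flatten(1).numpy()              # shape (512, 64)
labels = np.random.randint(0, 8, size=512)   # placeholder gesture labels
clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(features, labels)
```

For on-probe deployment, the abstract notes that the encoder is additionally quantized to 8-bit integers; in a PyTorch workflow this could be approximated with post-training static quantization, though the paper's exact quantization flow is not detailed in the abstract.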
Source Journal Metrics
CiteScore: 7.70
Self-citation rate: 16.70%
Articles published: 583
Review time: 4.5 months
About the journal: IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control includes the theory, technology, materials, and applications relating to: (1) the generation, transmission, and detection of ultrasonic waves and related phenomena; (2) medical ultrasound, including hyperthermia, bioeffects, tissue characterization and imaging; (3) ferroelectric, piezoelectric, and piezomagnetic materials, including crystals, polycrystalline solids, films, polymers, and composites; (4) frequency control, timing and time distribution, including crystal oscillators and other means of classical frequency control, and atomic, molecular and laser frequency control standards. Areas of interest range from fundamental studies to the design and/or applications of devices and systems.
Latest articles from this journal:
- TinyProbe: A Wearable 32-channel Multi-Modal Wireless Ultrasound Probe
- LSMD: Long-Short Memory-Based Detection Network for Carotid Artery Detection in B-mode Ultrasound Video Streams
- A Phantom-Free Approach for Estimating the Backscatter Coefficient of Aggregated Red Blood Cells applied to COVID-19 Patients
- High-frequency wearable ultrasound array belt for small animal echocardiography
- Deep Power-aware Tunable Weighting for Ultrasound Microvascular Imaging