
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Publications

Geryon: Edge Assisted Real-time and Robust Object Detection on Drones via mmWave Radar and Camera Fusion
Pub Date: 2022-01-01 DOI: 10.1145/3550298
Kaikai Deng, Dong Zhao, Qiaoyue Han, Shuyue Wang, Zihan Zhang, Anfu Zhou, Huadong Ma
Vision-based drone-view object detection suffers from severe performance degradation under adverse conditions (e.g., foggy weather, poor illumination). To remedy this, leveraging complementary mmWave radar has become a trend. However, existing fusion approaches seldom apply to drones due to i) the aggravated sparsity and noise of point clouds from low-cost commodity radars, and ii) explosive sensing data and intensive computations leading to high latency. To address these issues, we design Geryon, an edge-assisted object detection system on drones, which utilizes a suite of approaches to fully exploit the complementary advantages of camera and mmWave radar on three levels: (i) a novel multi-frame compositing approach utilizes the camera to assist the radar in addressing the aggravated sparsity and noise of radar point clouds; (ii) a saliency-area extraction and encoding approach utilizes the radar to assist the camera in reducing bandwidth consumption and offloading latency; (iii) a parallel transmission and inference approach with a lightweight box enhancement scheme further reduces the offloading latency while ensuring the edge-side accuracy-latency trade-off through parallelism and better camera-radar fusion. We implement and evaluate Geryon with four datasets we collected under foggy/rainy/snowy weather and poor illumination conditions, demonstrating its great advantages over other state-of-the-art approaches in terms of both accuracy and latency.
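To make the saliency-area idea concrete, here is a minimal sketch of how a radar point cloud could guide which image region gets encoded and offloaded; the pinhole projection, the helper names, and the single-cluster crop are illustrative assumptions, not Geryon's published pipeline.

```python
import numpy as np

def project_points(points_xyz, K):
    """Project radar points (N, 3) into pixel coordinates with a pinhole
    intrinsic matrix K (3x3). Assumes the points are already expressed in
    the camera frame; a real system would first apply the radar-to-camera
    extrinsic calibration."""
    uvw = (K @ points_xyz.T).T          # (N, 3) homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]     # divide by depth

def saliency_crop(image, points_xyz, K, pad=32):
    """Crop the image region around the projected radar points so that
    only the salient area is encoded and offloaded to the edge."""
    uv = project_points(points_xyz, K).astype(int)
    u0, v0 = uv.min(axis=0) - pad       # expand the bounding box
    u1, v1 = uv.max(axis=0) + pad
    h, w = image.shape[:2]
    u0, v0 = max(u0, 0), max(v0, 0)     # clamp to image bounds
    u1, v1 = min(u1, w), min(v1, h)
    return image[v0:v1, u0:u1]
```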
Citations: 7
ClenchClick: Hands-Free Target Selection Method Leveraging Teeth-Clench for Augmented Reality
Pub Date: 2022-01-01 DOI: 10.1145/3550327
Xiyuan Shen, Yukang Yan, Chun Yu, Yuanchun Shi
We propose to explore teeth-clenching-based target selection in Augmented Reality (AR), as the subtlety of the interaction can benefit applications that occupy the user's hands or that are sensitive to social norms. To support the investigation, we implemented an EMG-based teeth-clenching detection system (ClenchClick), where we adopted customized thresholds for different users. We first explored and compared potential interaction designs leveraging head movements and teeth clenching in combination. We finalized the interaction in the form of a Point-and-Click manner with clenches as the confirmation mechanism. We evaluated the workload and performance of ClenchClick by comparing it with two baseline methods in target selection tasks. Results showed that ClenchClick outperformed hand gestures in workload, physical load, accuracy, and speed, and outperformed dwell in workload and temporal load. Lastly, through user studies, we demonstrated the advantages of ClenchClick in real-world tasks, including efficient and accurate hands-free target selection, natural and unobtrusive interaction in public, and robust head gesture input. We investigated the interaction design, user experience in target selection tasks, and user performance in real-world tasks in a series of user studies. In our first user study, we explored nine potential designs and compared the three most promising ones (ClenchClick, ClenchCrossingTarget, ClenchCrossingEdge) with a hand-based (Hand Gesture) and a hands-free (Dwell) baseline in target selection tasks. ClenchClick had the best overall user experience with the lowest workload. It outperformed Hand Gesture in both physical and temporal load, and outperformed Dwell in temporal and mental load. In the second study, we evaluated the performance of ClenchClick with two detection methods (General and Personalized), in comparison with a hand-based (Hand Gesture) and a hands-free (Dwell) baseline. Results showed that ClenchClick outperformed Hand Gesture in accuracy (98.9% vs. 89.4%) and was comparable with Dwell in accuracy and efficiency. We further investigated users' behavioral characteristics by analyzing their cursor trajectories in the tasks, which showed that ClenchClick is a smoother target selection method: it was more psychologically friendly and occupied less of the user's attention. Finally, we conducted user studies on three real-world tasks that called for hands-free, socially friendly, and head-gesture interaction. Results revealed that ClenchClick is an efficient and accurate target selection method when both hands are occupied, is socially acceptable and satisfying when performed in public, and can serve as an activation mechanism for head gestures, significantly alleviating false-positive issues.
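As a rough illustration of the per-user EMG thresholding the abstract describes, the sketch below band-passes a raw EMG trace, builds an RMS envelope, and compares it against a user-calibrated threshold; the band edges, window length, and function names are assumptions rather than ClenchClick's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_clenches(emg, fs, user_threshold, win_s=0.1):
    """Band-pass a raw EMG trace to a typical surface-EMG band, rectify,
    smooth into an RMS envelope, and flag samples whose envelope exceeds
    a per-user calibrated threshold."""
    b, a = butter(4, [20, 150], btype="band", fs=fs)  # assumed band edges
    rectified = np.abs(filtfilt(b, a, emg))
    k = max(int(win_s * fs), 1)
    rms = np.sqrt(np.convolve(rectified**2, np.ones(k) / k, mode="same"))
    return rms > user_threshold  # boolean mask: True while clenching
```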
Citations: 2
MobiVQA: Efficient On-Device Visual Question Answering
Pub Date: 2022-01-01 DOI: 10.1145/3534619
Qingqing Cao
Visual Question Answering (VQA) is a relatively new task where a user can ask a natural question about an image and obtain an answer. VQA is useful for many applications and is widely popular among users with visual impairments. Our goal is to design a VQA application that works efficiently on mobile devices without requiring cloud support. Such a system allows users to ask visual questions privately, without having to send their questions to the cloud, while also reducing cloud communication costs. However, existing VQA applications use deep learning models that significantly improve accuracy but are computationally heavy. Unfortunately, existing techniques that optimize deep learning for mobile devices cannot be applied to VQA because the VQA task is multi-modal: it requires processing both vision and text data. Existing mobile optimizations that work for vision-only or text-only neural networks cannot be applied here because of the dependencies between the two modalities. Instead, we design MobiVQA, a set of optimizations that leverage the multi-modal nature of VQA. Using extensive evaluation on two VQA testbeds and two mobile platforms, we show that MobiVQA significantly improves latency and energy with minimal accuracy loss compared to state-of-the-art VQA models. For instance, MobiVQA can answer a visual question in 163 milliseconds on the phone, compared to the over 20-second latency incurred by the most accurate state-of-the-art model, while incurring less than a 1-point reduction in accuracy.
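The abstract does not spell out the optimizations, but one way to exploit VQA's multi-modal split, shown purely as a hypothetical sketch, is to cache the expensive image features on-device so that repeated questions about the same photo pay only for the light text branch; all function names here are stand-ins, not MobiVQA's API.

```python
import hashlib

# Hypothetical stand-ins for the on-device vision encoder, text encoder,
# and fusion/answering head; not MobiVQA's actual components.
def encode_image(image_bytes): ...
def encode_question(text): ...
def fuse_and_answer(image_features, text_features): ...

_image_feature_cache = {}

def answer(image_bytes, question):
    """Run the heavy vision branch once per image and reuse its features
    for every follow-up question, so only the light text branch runs per
    query; one way to exploit the multi-modal structure of VQA."""
    key = hashlib.sha1(image_bytes).hexdigest()
    if key not in _image_feature_cache:
        _image_feature_cache[key] = encode_image(image_bytes)  # heavy path
    return fuse_and_answer(_image_feature_cache[key], encode_question(question))
```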
Citations: 3
Earmonitor: In-ear Motion-resilient Acoustic Sensing Using Commodity Earphones
Pub Date: 2022-01-01 DOI: 10.1145/3569472
Xue Sun, Jie Xiong, Chao Feng, Wenwen Deng, Xudong Wei, Dingyi Fang, Xiaojiang Chen
{"title":"Earmonitor: In-ear Motion-resilient Acoustic Sensing Using Commodity Earphones","authors":"Xue Sun, Jie Xiong, Chao Feng, Wenwen Deng, Xudong Wei, Dingyi Fang, Xiaojiang Chen","doi":"10.1145/3569472","DOIUrl":"https://doi.org/10.1145/3569472","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"60 1","pages":"182:1-182:22"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74305576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WiAdv: Practical and Robust Adversarial Attack against WiFi-based Gesture Recognition System
Pub Date: 2022-01-01 DOI: 10.1145/3534618
Yuxuan Zhou, Huangxun Chen, Chenyu Huang, Qian Zhang
WiFi-based gesture recognition systems have attracted enormous interest owing to the non-intrusive nature of WiFi signals and the wide adoption of WiFi for communication. Despite performance boosted by integrating advanced deep neural network (DNN) classifiers, there is a lack of sufficient investigation into their security vulnerabilities, which are rooted in the open nature of the wireless medium and the inherent defects (e.g., susceptibility to adversarial attacks) of classifiers. To fill this gap, we study adversarial attacks on DNN-powered WiFi-based gesture recognition to encourage proper countermeasures. We design WiAdv to construct physically realizable adversarial examples to fool these systems. WiAdv features a signal synthesis scheme to craft adversarial signals with desired motion features based on the fundamental principle of WiFi-based gesture recognition, and a black-box attack scheme to handle the inconsistency between the perturbation space and the input space of the classifier caused by the intervening non-differentiable processing modules. We realize and evaluate our attack strategies against a representative state-of-the-art system, Widar3.0, in realistic settings. The experimental results show that the adversarial wireless signals generated by WiAdv achieve over 70% attack success rate on average and remain robust and effective across different physical settings. Our attack case study and analysis reveal the vulnerability of WiFi-based gesture recognition systems, and we hope WiAdv can help promote the improvement of these systems.
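For intuition, here is a generic query-based black-box attack loop of the kind WiAdv's threat model implies: it perturbs the input within an L-infinity budget and keeps any perturbation that lowers the classifier's confidence in the true gesture. This is a textbook random-search sketch, not WiAdv's signal-synthesis scheme, and `classify` is a hypothetical query interface.

```python
import numpy as np

def black_box_attack(classify, csi, true_label, eps=0.05, iters=500, seed=0):
    """Random-search black-box attack: propose perturbations inside an
    L-infinity ball around the original CSI input, keep any candidate
    that lowers the classifier's confidence in the true gesture, and stop
    once the predicted label flips. `classify` returns a probability
    vector over gesture classes."""
    rng = np.random.default_rng(seed)
    adv = csi.copy()
    best = classify(adv)[true_label]
    for _ in range(iters):
        step = rng.uniform(-eps, eps, size=csi.shape)
        cand = np.clip(adv + step, csi - eps, csi + eps)  # stay in the ball
        probs = classify(cand)
        if probs[true_label] < best:
            adv, best = cand, probs[true_label]
            if probs.argmax() != true_label:  # misclassified: attack done
                break
    return adv
```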
Citations: 3
IndexPen: Two-Finger Text Input with Millimeter-Wave Radar
Pub Date: 2022-01-01 DOI: 10.1145/3534601
Hao-Shun Wei, Ziheng Li, Alexander D. Galvan, Zhuoran Su, Xiao Zhang, E. Solovey, K. Pahlavan
In this paper, we introduce IndexPen, a novel interaction technique for text input through two-finger in-air micro-gestures, enabling touch-free, effortless, tracking-based interaction designed to mirror real-world writing. Our system is based on millimeter-wave radar sensing and does not require instrumentation on the user. IndexPen can successfully identify 30 distinct gestures, representing the letters A-Z as well as Space, Backspace, Enter, and a special Activation gesture to prevent unintentional input. Additionally, we include a noise class to differentiate gesture from non-gesture noise. We present our system design, including the radio frequency (RF) processing pipeline, classification model, and real-time detection algorithms. We further demonstrate our proof-of-concept system with data collected over ten days from five participants, yielding 95.89% cross-validation accuracy on 31 classes (including noise). Moreover, we explore the learnability and adaptability of our system for real-world text input with 16 participants who are first-time users of IndexPen over five sessions. After each session, the model pre-trained in the previous five-user study is calibrated on the data collected so far for the new user through transfer learning. The F-1 score showed an average increase of 9.14% per session with calibration, reaching an average of 88.3% in the last session across the 16 users. Meanwhile, we show that users can type sentences with IndexPen at 86.2% accuracy, measured by string similarity. This work builds a foundation and vision for future interaction interfaces that could be enabled by this paradigm.
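The per-user calibration step is standard transfer learning; a minimal PyTorch-style sketch is below, assuming the pre-trained model exposes a frozen feature extractor `features` and a trainable classification head `head` (both names are assumptions, as the paper does not publish this interface).

```python
import torch
import torch.nn as nn

def calibrate(model, new_user_loader, epochs=3, lr=1e-4):
    """Per-user calibration via transfer learning: freeze the pre-trained
    feature extractor and fine-tune only the classification head on the
    data collected so far for the new user."""
    for p in model.features.parameters():
        p.requires_grad = False          # keep the shared backbone fixed
    optimizer = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in new_user_loader:     # (radar frames, gesture labels)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```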
Citations: 5
AmbiEar: mmWave Based Voice Recognition in NLoS Scenarios
Pub Date: 2022-01-01 DOI: 10.1145/3550320
J. Zhang, Yinian Zhou, Rui Xi, Shuai Li, Junchen Guo, Yuan He
Millimeter wave (mmWave) based sensing is a significant technique that enables innovative smart applications, e.g., voice recognition. Existing works in this area require direct sensing of the human's near-throat region and consequently have limited applicability in non-line-of-sight (NLoS) scenarios. This paper proposes AmbiEar, the first mmWave-based voice recognition approach applicable in NLoS scenarios. AmbiEar is based on the insight that the human's voice causes correlated vibrations of the surrounding objects, regardless of the human's position and posture. Therefore, AmbiEar regards the surrounding objects as ears that can perceive sound and realizes indirect sensing of the human's voice by sensing the vibration of the surrounding objects. By incorporating designs such as common component extraction, signal superimposition, and an encoder-decoder network, AmbiEar tackles the challenges induced by low-SNR and distorted signals. We implement AmbiEar on a commercial mmWave radar and evaluate its performance under different settings. The experimental results show that AmbiEar achieves a word recognition accuracy of 87.21% in NLoS scenarios and reduces the recognition error by 35.1% compared to the direct sensing approach.
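A minimal sketch of the "common component extraction" idea: if several surrounding objects vibrate with the voice, the voice-induced part of their phase signals is correlated, so the dominant principal component across objects approximates the shared sound signal. This is an SVD-based approximation for illustration, not AmbiEar's exact design.

```python
import numpy as np

def common_component(object_bins):
    """Given complex reflections from several candidate objects, shape
    (objects, samples), unwrap each object's phase, remove its mean, and
    take the dominant SVD component: the voice-induced vibration is the
    part correlated across objects, while per-object noise is not."""
    phases = np.unwrap(np.angle(object_bins), axis=1)
    phases -= phases.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(phases, full_matrices=False)
    return s[0] * Vt[0]  # shared time series, up to scale and sign
```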
Citations: 10
LoEar: Push the Range Limit of Acoustic Sensing for Vital Sign Monitoring
Pub Date: 2022-01-01 DOI: 10.1145/3550293
Lei Wang, Wei Li, Ke Sun, Fusang Zhang, Tao Gu, Chenren Xu, Daqing Zhang
Acoustic sensing has been explored in numerous applications, leveraging the wide deployment of acoustic-enabled devices. However, most existing acoustic sensing systems work only within a very short range due to the fast attenuation of ultrasonic signals, hindering their real-world deployment. In this paper, we present a novel acoustic sensing system named LoEar, using only a single microphone and speaker, to detect vital signs (respiration and heartbeat) with a significantly increased sensing range. We first develop a model, namely Carrierforming, to enhance the signal-to-noise ratio (SNR) via coherent superposition across multiple subcarriers on the target path. We then propose a novel technique called Continuous-MUSIC (Continuous-MUltiple SIgnal Classification) to detect dynamic reflections containing subtle motion, and further identify the target user based on the frequency distribution to enable Carrierforming. Finally, we adopt an adaptive Infinite Impulse Response (IIR) comb notch filter to recover the heartbeat pattern from the Channel Frequency Response (CFR) measurements, which are dominated by respiration, and further develop a peak-based scheme to estimate the respiration rate and heart rate. We conduct extensive experiments to evaluate our system, and the results show that it outperforms the state of the art using commercial devices: the range of respiration sensing is increased from 2 m to 7 m, and the range of heartbeat sensing is increased from 1.2 m to 6.5 m.
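The respiration-removal step maps naturally onto SciPy's comb notch filter; the sketch below places the notch at an estimated respiration rate and counts peaks to get heart rate. The Q factor and peak spacing are assumed values, and LoEar's filter is adaptive rather than fixed as here.

```python
import numpy as np
from scipy.signal import iircomb, filtfilt, find_peaks

def heart_rate_from_cfr(cfr_phase, fs, resp_hz):
    """Suppress respiration and its harmonics with an IIR comb notch,
    then estimate heart rate from the remaining peaks."""
    # iircomb requires the comb frequency to divide fs evenly, so snap
    # the estimated respiration rate to the nearest admissible value.
    w0 = fs / round(fs / resp_hz)
    b, a = iircomb(w0, Q=30, ftype="notch", fs=fs)  # Q is an assumed value
    heartbeat = filtfilt(b, a, cfr_phase)
    peaks, _ = find_peaks(heartbeat, distance=int(0.4 * fs))  # <=150 bpm
    return 60.0 * len(peaks) / (len(cfr_phase) / fs)  # beats per minute
```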
Citations: 4
DiverSense: Maximizing Wi-Fi Sensing Range Leveraging Signal Diversity
Pub Date: 2022-01-01 DOI: 10.1145/3536393
Li Yang
The ubiquity of Wi-Fi infrastructure has facilitated the development of a range of Wi-Fi based sensing applications. Wi-Fi sensing relies on weak signal reflections from the human target and thus supports only a limited sensing range, which significantly hinders the real-world deployment of the proposed sensing systems. To extend the sensing range, traditional algorithms focus on suppressing the noise introduced by imperfect Wi-Fi hardware. This paper takes a different direction and proposes to enhance the quality of the sensing signal by fully exploiting the signal diversity provided by the Wi-Fi hardware. We propose DiverSense, a system that combines the sensing signals received from all subcarriers and all antennas in the array to fully utilize spatial and frequency diversity. To guarantee the diversity gain after signal combining, we also propose a time-diversity-based signal alignment algorithm to align the phases of the multiple received sensing signals. We implement the proposed methods in a respiration monitoring system using commodity Wi-Fi devices and evaluate the performance in diverse environments. Extensive experimental results demonstrate that DiverSense is able to accurately monitor human respiration even when the sensing signal is below the noise floor, and therefore boosts the sensing range to 40 meters, a 3× improvement over the current state of the art. DiverSense also works robustly under NLoS scenarios; e.g., it is able to accurately monitor respiration even when the human and the Wi-Fi transceivers are separated by two concrete walls with wooden doors.
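A minimal sketch of the diversity-combining idea: align each antenna/subcarrier stream's phase against a reference stream, then superimpose so the target reflection adds coherently while noise averages out. The reference-based alignment here is a simplification of DiverSense's time-diversity alignment algorithm.

```python
import numpy as np

def coherent_combine(csi):
    """Coherently superimpose sensing signals from all antennas and
    subcarriers. `csi` is complex with shape (streams, samples); each
    stream is rotated so its phase matches the first (reference) stream,
    then all streams are summed so the reflection adds up while noise
    averages out."""
    ref = csi[0]
    offsets = np.angle(np.sum(csi * ref.conj(), axis=1, keepdims=True))
    aligned = csi * np.exp(-1j * offsets)
    return aligned.sum(axis=0)  # diversity-combined sensing signal
```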
Citations: 9
TransFloor: Transparent Floor Localization for Crowdsourcing Instant Delivery
Pub Date: 2022-01-01 DOI: 10.1145/3569470
Zhiqing Xie, Haiyong Luo, Xiaotian Zhang, Hao Xiong, Fang Zhao, Zhaohui Li, Qi Ye, Bojie Rong, Jiuchong Gao
Smart on-demand delivery services require accurate indoor localization to enhance the system-human synergy experience of couriers in complex multi-story malls and to support platform construction. Floor localization is an essential part of indoor positioning: it can provide floor/altitude data support for upper-level 3D indoor navigation services (e.g., delivery route planning) to improve delivery efficiency and optimize order dispatching strategies. We argue that, due to label dependence and device dependence, existing floor localization methods can neither be flexibly deployed at scale across numerous multi-story malls nationwide nor apply to all couriers/users on the platform. This paper proposes a novel self-evolving and user-transparent floor localization system named TransFloor, based on crowdsourced delivery data (e.g., order status and sensor data), without additional labeling investment or specialized equipment constraints. TransFloor consists of an unsupervised barometer-based module, IOD-TKPD, and an NLP-inspired Wi-Fi-based module, Wifi2Vec; Self-Labeling bridges the two to achieve fully label-free and device-independent floor positioning. In addition, TransFloor is designed as a lightweight plugin embedded into the platform without refactoring the existing architecture, and it has been deployed nationwide to adaptively launch real-time, accurate 3D/floor positioning services for numerous crowdsourcing couriers. We evaluate TransFloor on real-world records from an instant delivery platform (involving 672,282 orders, 7,390 couriers, and 6,206
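The name Wifi2Vec suggests a word2vec-style embedding of access points; a plausible sketch using gensim is shown below, treating each Wi-Fi scan as a sentence of BSSID tokens so APs co-observed on a floor embed nearby. The training parameters are illustrative assumptions, not TransFloor's published settings.

```python
from gensim.models import Word2Vec

def train_wifi2vec(scans, dim=64):
    """Treat every Wi-Fi scan as a 'sentence' whose 'words' are the
    observed BSSIDs; skip-gram word2vec then places APs that co-occur
    (i.e., are installed on the same floor) close together in the
    embedding space."""
    return Word2Vec(sentences=scans, vector_size=dim, window=10,
                    min_count=2, sg=1, workers=4)

# usage sketch: each scan is a list of BSSID strings, e.g.
# model = train_wifi2vec([["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"], ...])
```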
Citations: 1