
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

TextureSight
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631413
Xue Wang, Yang Zhang
Objects engaged by users' hands carry rich contextual information, owing to their strong correlation with user activities. Tools such as toothbrushes and wipes indicate cleansing and sanitation, while mice and keyboards imply work. Much research has endeavored to sense hand-engaged objects to supply wearables with implicit interactions or ambient computing with personal informatics. We propose TextureSight, a smart-ring sensor that identifies hand-engaged objects by detecting their distinctive surface textures using laser speckle imaging in a ring form factor. We conducted a two-day experience-sampling study to investigate the uniqueness and repeatability of object-texture combinations across routine objects. We grounded our sensing in a theoretical model and simulations, powered it with state-of-the-art deep neural network techniques, and evaluated it with a user study. TextureSight constitutes a valuable addition to the literature for its capability to sense passive objects without emitting EMI or vibration and its elimination of a lens to preserve user privacy, leading to a new, practical method for activity recognition and context-aware computing.
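To illustrate how a surface's laser-speckle statistics could separate objects, here is a toy sketch (not the paper's deep-neural-net pipeline): it computes the classic speckle-contrast statistic K = σ/μ over an image patch and assigns the nearest per-object centroid. The object names, centroid values, and gamma-distributed synthetic patch are all hypothetical.

```python
import numpy as np

def speckle_contrast(patch: np.ndarray) -> float:
    """Classic first-order speckle statistic K = sigma / mean; different
    surface materials and roughnesses produce different contrasts."""
    return float(patch.std() / patch.mean())

def classify(k: float, centroids: dict) -> str:
    """Nearest-centroid lookup over per-object contrast averages."""
    return min(centroids, key=lambda name: abs(centroids[name] - k))

# Hypothetical per-object centroids from enrollment captures.
centroids = {"toothbrush": 0.42, "mouse": 0.18, "mug": 0.30}

# Synthetic speckle patch: gamma-distributed intensities whose shape
# parameter fixes the contrast at roughly 1/sqrt(30) ~ 0.18.
rng = np.random.default_rng(0)
patch = rng.gamma(shape=30.0, scale=1.0, size=(64, 64))
k = speckle_contrast(patch)
label = classify(k, centroids)
```

A real system would replace the single scalar feature with learned embeddings, but the nearest-match structure is the same.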
Citations: 0
LiqDetector
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631443
Zhu Wang, Yifan Guo, Zhihui Ren, Wenchao Song, Zhuo Sun, Chaoxiong Chen, Bin Guo, Zhiwen Yu
With the advancement of wireless sensing technologies, RF-based contactless liquid detection has attracted increasing attention. Compared with other RF devices, mmWave radar offers large bandwidth at low cost. While existing radar-based liquid detection systems demonstrate promising performance, they share a shortcoming: the detection result depends on container-related factors (e.g., container placement, caliber, and material). In this paper, to enable container-independent liquid detection with a COTS mmWave radar, we propose a dual-reflection model that exploits reflections from different interfaces of the liquid container. Specifically, we design a pair of amplitude ratios based on the signals reflected from different interfaces, and theoretically demonstrate how the refractive index of a liquid can be estimated by eliminating the container's impact. To validate the proposed approach, we implement a liquid detection system, LiqDetector. Experimental results show that LiqDetector achieves cross-container estimation of a liquid's refractive index with a mean absolute percentage error (MAPE) of about 4.4%. Moreover, the classification accuracies for 6 different liquids and for alcohol of different strengths (even with a difference of 1%) exceed 96% and 95%, respectively. To the best of our knowledge, this is the first study to achieve container-independent liquid detection with a COTS mmWave radar using only one pair of Tx-Rx antennas.
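To make the refractive-index idea concrete, here is a minimal single-interface simplification (the paper's dual-reflection model additionally ratios reflections from two container interfaces to cancel container-dependent terms): inverting the normal-incidence Fresnel amplitude reflection coefficient recovers the refractive index from a measured amplitude ratio. The water value is illustrative.

```python
def refractive_index_from_reflection(r: float, n_incident: float = 1.0) -> float:
    """Invert the normal-incidence Fresnel amplitude reflection
    coefficient: r = (n2 - n1) / (n2 + n1)  =>  n2 = n1 * (1 + r) / (1 - r)."""
    return n_incident * (1 + r) / (1 - r)

# Illustrative round trip with water (n ~ 1.33): forward-model the
# reflection at an air-water interface, then invert it.
r_water = (1.33 - 1.0) / (1.33 + 1.0)
n_est = refractive_index_from_reflection(r_water)
```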
Citations: 0
Scribe
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631411
Yang Bai, Irtaza Shahid, Harshvardhan Takawale, Nirupam Roy
This paper presents the design and implementation of Scribe, a comprehensive voice-processing and handwriting interface for voice assistants. Distinct from prior work, Scribe is a precise tracking interface that can coexist with the voice interface on low-sampling-rate voice assistants. Scribe can be used for 3D free-form drawing, writing, and motion tracking for gaming. Taking handwriting as a specific application, it can also capture natural strokes and an individualized writing style while occupying only a single frequency. The core technique is an accurate acoustic ranging method called Cross Frequency Continuous Wave (CFCW) sonar, which lets voice assistants use ultrasound as a ranging signal while using their regular microphone system as the receiver. We also design a new optimization algorithm that requires only a single frequency for time-difference-of-arrival estimation. The Scribe prototype achieves 73 μm median error in 1D ranging and 1.4 mm median error in 3D tracking of an acoustic beacon using the microphone array found in voice assistants. Our implementation of an in-air handwriting interface achieves 94.1% accuracy with automatic handwriting-to-text software, similar to writing on paper (96.6%). At the same time, the error rate of voice-based user authentication increases only from 6.26% to 8.28%.
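The time-difference estimation underlying acoustic ranging can be sketched with a plain cross-correlation stand-in (not the paper's CFCW sonar): the lag maximizing the correlation between two microphone channels gives the sample delay, which converts to a path-length difference. The tone frequency, sampling rate, and 7-sample delay are all synthetic.

```python
import numpy as np

def tdoa_samples(x: np.ndarray, y: np.ndarray) -> int:
    """Delay of y relative to x, in samples, via full cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

fs = 16_000                              # microphone sampling rate (Hz)
t = np.arange(512) / fs
tone = np.sin(2 * np.pi * 1000 * t)      # single-frequency probe (illustrative)
delay = 7                                # ground-truth offset in samples
x = np.concatenate([tone, np.zeros(delay)])   # reference channel
y = np.concatenate([np.zeros(delay), tone])   # delayed channel
lag = tdoa_samples(x, y)
# Path-length difference = lag / fs * speed of sound (~343 m/s) ~ 0.15 m
```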
Citations: 0
PmTrack
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631433
Hankai Liu, Xiulong Liu, Xin Xie, Xinyu Tong, Keqiu Li
The difficulty of obtaining targets' identities poses a significant obstacle to personalized and customized millimeter-wave (mmWave) sensing. Existing solutions that learn individual differences from signal features have limitations in practical applications. This paper presents PmTrack, a personalized mmWave-based human tracking system that introduces inertial measurement units (IMUs) as identity indicators. Widely available in portable devices such as smartwatches and smartphones, IMUs can upload identity and orientation data over existing wireless networks, and can therefore assist radar target identification in a lightweight manner with little deployment or carrying burden for users. PmTrack innovatively adopts orientation as the matching feature, thereby overcoming the data heterogeneity between radar and IMU while avoiding the effect of cumulative errors. In implementing PmTrack, we propose a comprehensive set of optimization methods for detection enhancement, interference suppression, continuity maintenance, and trajectory correction, which solve a series of practical problems caused by the three major challenges of weak reflection, point-cloud overlap, and body-bounce ghosts in multi-person tracking. In addition, an orientation-correction method is proposed to overcome IMU gimbal lock. Extensive experimental results demonstrate that PmTrack achieves identification accuracies of 98% and 95% with five people in a hall and a meeting room, respectively.
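The orientation-matching idea can be sketched as follows, using entirely hypothetical data: each radar trajectory yields a heading series, and the IMU stream whose yaw series best agrees with it (minimum wrapped angular error) supplies the identity. The names and walking patterns are illustrative, not from the paper.

```python
import numpy as np

def headings(traj: np.ndarray) -> np.ndarray:
    """Per-step motion heading (radians) of a 2-D trajectory."""
    d = np.diff(traj, axis=0)
    return np.arctan2(d[:, 1], d[:, 0])

def match_identity(traj: np.ndarray, imu_yaws: dict) -> str:
    """Pick the IMU stream whose yaw series best matches the trajectory's
    heading series (minimum mean absolute wrapped-angle error)."""
    h = headings(traj)
    def err(yaw):
        return float(np.abs(np.angle(np.exp(1j * (h - yaw)))).mean())
    return min(imu_yaws, key=lambda name: err(imu_yaws[name]))

# Hypothetical data: the radar target walks along +x, then turns toward +y.
traj = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], dtype=float)
imu_yaws = {
    "alice": np.array([0.0, 0.0, np.pi / 2, np.pi / 2]),  # turns like the target
    "bob":   np.array([np.pi] * 4),                        # walking the other way
}
who = match_identity(traj, imu_yaws)
```

Matching on orientation rather than position sidesteps the cumulative drift of IMU dead reckoning, as the abstract notes.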
Citations: 0
AdaStreamLite
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631460
Yuheng Wei, Jie Xiong, Hui Liu, Yingtao Yu, Jiangtao Pan, Junzhao Du
Streaming speech recognition aims to transcribe speech to text in a streaming manner, providing real-time speech interaction for smartphone users. However, it is not trivial to develop a high-performance streaming speech recognition system that runs purely on mobile platforms, given the complexity of real-world acoustic environments and the limited computational resources of smartphones. Most existing solutions generalize poorly to unseen environments and have difficulty working with streaming speech. In this paper, we design AdaStreamLite, an environment-adaptive streaming speech recognition tool for smartphones. AdaStreamLite interacts with its surroundings to capture the characteristics of the current acoustic environment, improving robustness against ambient noise in a lightweight manner. We design an environment representation extractor to model acoustic environments with compact feature vectors, and construct a representation lookup table to improve AdaStreamLite's generalization to unseen environments. We train our system on large, publicly available speech datasets covering different languages, and conduct experiments in a wide range of real acoustic environments with different smartphones. The results show that AdaStreamLite outperforms state-of-the-art methods in recognition accuracy, computational resource consumption, and robustness against unseen environments.
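A hedged sketch of the representation-lookup step, with entirely illustrative vectors and environment names: the current acoustic-context embedding is matched to the closest stored environment representation by cosine similarity.

```python
import numpy as np

def nearest_environment(embedding: np.ndarray, table: dict) -> str:
    """Return the stored environment whose representation vector is most
    similar (cosine) to the current acoustic-context embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(table, key=lambda name: cos(embedding, table[name]))

# Hypothetical 3-D representations; a real extractor would emit
# higher-dimensional vectors learned from noisy speech.
table = {
    "quiet-office": np.array([0.9, 0.1, 0.0]),
    "street":       np.array([0.1, 0.8, 0.4]),
    "cafe":         np.array([0.2, 0.3, 0.9]),
}
current = np.array([0.15, 0.75, 0.5])
env = nearest_environment(current, table)
```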
Citations: 0
JoulesEye
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631422
Rishiraj Adhikary, M. Sadeh, N. Batra, Mayank Goel
Smartphones and smartwatches have contributed significantly to fitness monitoring by providing real-time statistics, thanks to accurate tracking of physiological indices such as heart rate. However, estimates of calories burned during exercise are inaccurate and cannot be used for medical diagnosis. In this work, we present JoulesEye, a smartphone thermal-camera-based system that can accurately estimate calorie burn by monitoring respiration rate. We evaluated JoulesEye on 54 participants who performed high-intensity cycling and running. The mean absolute percentage error (MAPE) of JoulesEye was 5.8%, significantly better than the 37.6% MAPE of commercial smartwatch-based methods that use heart rate alone. Finally, we show that an ultra-low-resolution thermal camera, small enough to fit inside a watch or other wearable, is sufficient for accurate calorie-burn estimation. These results suggest that JoulesEye is a promising new method for accurate and reliable calorie-burn estimation.
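To show roughly why respiration rate is informative for energy expenditure, here is an illustrative back-of-the-envelope chain, not JoulesEye's trained estimator; the tidal volume, the ventilatory equivalent of about 25, and the roughly 5 kcal released per litre of oxygen are textbook ballpark figures used here as assumptions.

```python
def calories_per_minute(breaths_per_min: float, tidal_volume_l: float) -> float:
    """Illustrative respiration-to-energy chain (NOT JoulesEye's trained
    estimator). Assumed ballpark constants:
      * minute ventilation VE = breaths/min * tidal volume   (L/min)
      * ventilatory equivalent VE / VO2 ~ 25 during exercise
      * ~5 kcal released per litre of O2 consumed
    """
    ve = breaths_per_min * tidal_volume_l   # minute ventilation, L/min
    vo2 = ve / 25.0                         # oxygen uptake, L/min
    return vo2 * 5.0                        # kcal per minute

# Hypothetical hard-cycling numbers: 40 breaths/min at 2 L per breath.
kcal_min = calories_per_minute(40, tidal_volume_l=2.0)
```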
Citations: 0
Aragorn
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631406
Harish Venugopalan, Z. Din, Trevor Carpenter, Jason Lowe-Power, Samuel T. King, Zubair Shafiq
Mobile app developers often rely on cameras to implement rich features. However, giving apps unfettered access to the mobile camera poses a privacy threat when camera frames capture sensitive information that is not needed for the app's functionality. To mitigate this threat, we present Aragorn, a novel privacy-enhancing mobile camera system that provides fine-grained control over what information can be present in camera frames before apps can access them. Aragorn automatically sanitizes camera frames by detecting regions that are essential to an app's functionality and blocking out everything else, protecting privacy while retaining app utility. Aragorn can cater to a wide range of camera apps and incorporates knowledge distillation and crowdsourcing to extend robust support to previously unsupported apps. In our evaluations, with no degradation in utility, Aragorn detects credit cards with 89% accuracy and faces with 100% accuracy in the contexts of credit-card scanning and face recognition, respectively. We show that Aragorn's implementation in the Android camera subsystem incurs an average frame-rate drop of only 0.01 frames per second. Our evaluations show that the overhead Aragorn imposes on system performance is reasonable.
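The sanitization step can be sketched in a few lines: given a detected app-essential bounding box (hypothetical coordinates here), everything outside it is blocked out before the frame reaches the app.

```python
import numpy as np

def sanitize_frame(frame: np.ndarray, roi: tuple) -> np.ndarray:
    """Zero every pixel outside the app-essential bounding box
    roi = (x, y, w, h) before the app is allowed to read the frame."""
    x, y, w, h = roi
    out = np.zeros_like(frame)
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    return out

# Toy 10x10 "frame"; the ROI stands in for a detected credit card.
frame = np.arange(100, dtype=np.uint8).reshape(10, 10)
clean = sanitize_frame(frame, roi=(2, 3, 4, 2))
```

In the real system the ROI comes from per-app detectors; the masking itself is this simple.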
Citations: 0
Enabling WiFi Sensing on New-generation WiFi Cards
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3633807
E. Yi, Fusang Zhang, Jie Xiong, Kai Niu, Zhiyun Yao, Daqing Zhang
The last few years have witnessed the rapid development of WiFi sensing, with a large spectrum of applications enabled. However, existing work mainly leverages obsolete 802.11n WiFi cards (i.e., Intel 5300 and Atheros AR9k-series cards) for sensing. Meanwhile, the mainstream WiFi protocols currently in use are 802.11ac/ax, and commodity WiFi products on the market are equipped with new-generation WiFi chips such as the Broadcom BCM43794 and Qualcomm QCN5054. After conducting benchmark experiments, we find that WiFi sensing has problems working on these new cards. New communication features (e.g., MU-MIMO) designed to facilitate data transmission negatively impact WiFi sensing. Conventional CSI base signals, such as CSI amplitude and/or the CSI phase difference between antennas, which worked well on the Intel 5300 802.11n WiFi card, may fail on new cards. In this paper, we propose delicate signal-processing schemes that make wireless sensing work well on these new WiFi cards. We employ two typical sensing applications, human respiration monitoring and human trajectory tracking, to demonstrate the effectiveness of the proposed schemes. We believe it is critical to ensure that WiFi sensing is compatible with the latest WiFi protocols, and this work moves one important step toward real-life adoption of WiFi sensing.
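One widely used remedy for random per-packet phase offsets in CSI, sketched below on synthetic data, is the CSI-ratio trick: dividing the complex CSI of two antennas that share an RF oscillator cancels the common random phase term. This is a generic illustration of the idea, not necessarily the exact scheme the paper proposes for new-generation cards.

```python
import numpy as np

def csi_ratio(csi_a: np.ndarray, csi_b: np.ndarray) -> np.ndarray:
    """Complex CSI ratio between two antennas of the same receiver.
    Both antennas share one RF oscillator, so the per-packet random
    phase e^{j*theta(t)} cancels in the division, leaving a quantity
    stable enough for sensing."""
    return csi_a / csi_b

# Synthetic example: two static channels observed through a shared,
# randomly drifting local-oscillator phase (values illustrative).
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, size=100)   # per-packet phase jitter
h_a = (1.0 + 0.2j) * np.exp(1j * theta)
h_b = (0.8 - 0.1j) * np.exp(1j * theta)
ratio = csi_ratio(h_a, h_b)                   # constant despite the jitter
```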
Citations: 0
ClearSpeech
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631409
Dong Ma, Ting Dang, Ming Ding, Rajesh Balan
Wireless earbuds have been gaining increasing popularity, and using them to make phone calls or issue voice commands requires the earbud microphones to pick up human speech. When the speaker is in a noisy environment, speech quality degrades significantly and requires speech enhancement (SE). In this paper, we present ClearSpeech, a novel deep-learning-based SE system designed for wireless earbuds. Specifically, by jointly using the earbud's in-ear and out-ear microphones, we devised a suite of techniques to effectively fuse the two signals and enhance the magnitude and phase of the speech spectrogram. We built an earbud prototype to evaluate ClearSpeech under various settings with data collected from 20 subjects. Our results suggest that ClearSpeech can improve SE performance significantly compared to conventional approaches using the out-ear microphone only. We also show that ClearSpeech can process user speech in real time on smartphones.
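The input side of the fusion idea — computing spectrograms of the in-ear and out-ear microphone signals and stacking them as a two-channel feature — can be sketched as follows. This is only an illustration of the front end under assumed parameters (16 kHz audio, 256-sample Hann windows, synthetic signals); ClearSpeech's actual network and enhancement stages are not reproduced here:

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Minimal STFT: Hann-windowed frames -> complex spectrogram (frames, bins)."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.fft.rfft(np.asarray(frames), axis=1)

fs = 16000
t = np.arange(fs) / fs
# Synthetic stand-ins: the in-ear signal is cleaner but band-limited by the
# ear canal; the out-ear signal carries the same speech plus ambient noise.
in_ear = np.sin(2 * np.pi * 220 * t)
out_ear = np.sin(2 * np.pi * 220 * t) + 0.5 * np.random.default_rng(1).normal(size=fs)

spec_in, spec_out = stft(in_ear), stft(out_ear)

# One simple fusion choice: stack the two magnitude spectrograms as a
# two-channel network input, and keep the out-ear phase for reconstruction.
features = np.stack([np.abs(spec_in), np.abs(spec_out)], axis=0)
phase = np.angle(spec_out)

print(features.shape)  # (2, frames, 129) for 256-point rfft
```

A learned model would map `features` to an enhanced magnitude (and, in ClearSpeech's case, also refine the phase) before inverting the STFT.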
{"title":"ClearSpeech","authors":"Dong Ma, Ting Dang, Ming Ding, Rajesh Balan","doi":"10.1145/3631409","DOIUrl":"https://doi.org/10.1145/3631409","url":null,"abstract":"Wireless earbuds have been gaining increasing popularity and using them to make phone calls or issue voice commands requires the earbud microphones to pick up human speech. When the speaker is in a noisy environment, speech quality degrades significantly and requires speech enhancement (SE). In this paper, we present ClearSpeech, a novel deep-learning-based SE system designed for wireless earbuds. Specifically, by jointly using the earbud's in-ear and out-ear microphones, we devised a suite of techniques to effectively fuse the two signals and enhance the magnitude and phase of the speech spectrogram. We built an earbud prototype to evaluate ClearSpeech under various settings with data collected from 20 subjects. Our results suggest that ClearSpeech can improve the SE performance significantly compared to conventional approaches using the out-ear microphone only. We also show that ClearSpeech can process user speech in real-time on smartphones.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"3 6","pages":"1 - 25"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RLoc
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631437
Tianyu Zhang, Dongheng Zhang, Guanzhong Wang, Yadong Li, Yang Hu, Qibin sun, Yan Chen
In recent years, decimeter-level accuracy in WiFi indoor localization has become attainable within controlled environments. However, existing methods encounter challenges in maintaining robustness in more complex indoor environments: angle-based methods are compromised by significant localization errors due to unreliable Angle of Arrival (AoA) estimations, and fingerprint-based methods suffer from performance degradation due to environmental changes. In this paper, we propose RLoc, a learning-based system designed for reliable localization and tracking. The key design principle of RLoc lies in quantifying the uncertainty level that arises in the AoA estimation task and then exploiting the uncertainty to enhance the reliability of localization and tracking. To this end, RLoc first manually extracts the underutilized beamwidth feature via signal processing techniques. Then, it integrates the uncertainty quantification into the neural network design through a Kullback-Leibler (KL) divergence loss and ensemble techniques. Finally, these quantified uncertainties guide RLoc to optimally leverage the diversity of Access Points (APs) and the temporally continuous information of AoAs. Our experiments, evaluated on two datasets gathered from commercial off-the-shelf WiFi devices, demonstrate that RLoc surpasses state-of-the-art approaches by an average of 36.27% in in-domain scenarios and 20.40% in cross-domain scenarios.
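The core uncertainty idea can be illustrated with the ensemble half of the approach: a set of estimators produces several AoA predictions per access point, their spread serves as an uncertainty measure, and inverse-variance weights let downstream fusion down-weight unreliable APs. The numbers below are synthetic, and RLoc's actual networks and KL-based training are not reproduced:

```python
import numpy as np

# Ensemble AoA predictions (degrees): 5 hypothetical models x 3 APs.
ensemble_aoa = np.array([
    [30.1, 29.8, 30.3, 30.0, 29.9],   # AP 0: members agree -> low uncertainty
    [45.0, 44.7, 45.2, 45.1, 44.9],   # AP 1: members agree -> low uncertainty
    [10.0, 60.0, 35.0, 80.0, -5.0],   # AP 2: members disagree -> high uncertainty
])

mean_aoa = ensemble_aoa.mean(axis=1)   # fused AoA estimate per AP
var_aoa = ensemble_aoa.var(axis=1)     # ensemble spread as the uncertainty proxy

# Normalized inverse-variance weights for fusing AoA-derived evidence
# across APs: the disagreeing AP contributes least.
weights = 1.0 / (var_aoa + 1e-6)
weights /= weights.sum()

print(weights.argmin())  # index of the least trusted AP -> 2
```

In RLoc the per-estimate uncertainty additionally comes from a KL-divergence training objective; the inverse-variance weighting above is just one simple way to act on such uncertainties during fusion.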
{"title":"RLoc","authors":"Tianyu Zhang, Dongheng Zhang, Guanzhong Wang, Yadong Li, Yang Hu, Qibin sun, Yan Chen","doi":"10.1145/3631437","DOIUrl":"https://doi.org/10.1145/3631437","url":null,"abstract":"In recent years, decimeter-level accuracy in WiFi indoor localization has become attainable within controlled environments. However, existing methods encounter challenges in maintaining robustness in more complex indoor environments: angle-based methods are compromised by the significant localization errors due to unreliable Angle of Arrival (AoA) estimations, and fingerprint-based methods suffer from performance degradation due to environmental changes. In this paper, we propose RLoc, a learning-based system designed for reliable localization and tracking. The key design principle of RLoc lies in quantifying the uncertainty level arises in the AoA estimation task and then exploiting the uncertainty to enhance the reliability of localization and tracking. To this end, RLoc first manually extracts the underutilized beamwidth feature via signal processing techniques. Then, it integrates the uncertainty quantification into neural network design through Kullback-Leibler (KL) divergence loss and ensemble techniques. Finally, these quantified uncertainties guide RLoc to optimally leverage the diversity of Access Points (APs) and the temporal continuous information of AoAs. 
Our experiments, evaluating on two datasets gathered from commercial off-the-shelf WiFi devices, demonstrate that RLoc surpasses state-of-the-art approaches by an average of 36.27% in in-domain scenarios and 20.40% in cross-domain scenarios.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"12 3","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0