
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

KeyStub
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631442
John Nolan, Kun Qian, Xinyu Zhang
The proliferation of the Internet of Things is calling for new modalities that enable human interaction with smart objects. Recent research has explored RFID tags as passive sensors to detect finger touch. However, existing approaches either rely on custom-built RFID readers or are limited to pre-trained finger-swiping gestures. In this paper, we introduce KeyStub, which can discriminate multiple discrete keystrokes on an RFID tag. KeyStub interfaces with commodity RFID ICs, using multiple microwave-band resonant stubs as keys. Each stub's geometry is designed to create a predefined impedance mismatch to the RFID IC upon a keystroke, which in turn translates into a known amplitude and phase shift, remotely detectable by an RFID reader. KeyStub combines two ICs' signals through a single common-mode antenna and performs differential detection to avoid the need for calibration and to ensure reliability in heavy multi-path environments. Our experiments using a commercial off-the-shelf RFID reader and ICs show that up to 8 buttons can be detected and decoded with accuracy greater than 95%. KeyStub points towards a novel way of using resonant stubs to augment RF antenna structures, thus enabling new passive wireless interaction modalities.
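The differential-detection step is easy to see in miniature: dividing the two ICs' backscatter responses cancels the channel effects common to the shared antenna, leaving only the stub-induced amplitude ratio and phase shift, which can then be matched against per-key signatures. Below is a minimal Python sketch of that idea, assuming complex-valued IC responses; the signature table, key names, and values are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical per-key signatures as (amplitude ratio, phase shift) pairs,
# one per resonant stub; the names and numbers are made up for illustration.
SIGNATURES = {
    "key_1": (0.82, 0.35),
    "key_2": (0.67, 0.90),
    "key_3": (0.55, 1.40),
}

def differential_feature(resp_a: complex, resp_b: complex):
    """Calibration-free feature from the two ICs' backscatter responses.

    Dividing the responses cancels channel effects common to the shared
    common-mode antenna path, leaving the stub-induced amplitude ratio
    and phase shift."""
    ratio = resp_a / resp_b
    return abs(ratio), np.angle(ratio)

def decode_keystroke(resp_a: complex, resp_b: complex) -> str:
    amp, phase = differential_feature(resp_a, resp_b)
    # Nearest-signature match in (amplitude, phase) space.
    return min(SIGNATURES, key=lambda k: (SIGNATURES[k][0] - amp) ** 2
                                         + (SIGNATURES[k][1] - phase) ** 2)

# A keystroke perturbs IC A's impedance match relative to IC B.
print(decode_keystroke(0.80 * np.exp(1j * 0.4), 1.0 + 0j))  # -> key_1
```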
Citations: 0
BodyTouch
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631426
Wen-Wei Cheng, Liwei Chan
This paper presents a study on the touch precision of an eyes-free, body-based interface using on-body and near-body touch methods with and without skin contact. We evaluate user touch accuracy on four different button layouts. These layouts progressively increase the number of buttons between adjacent body joints, resulting in 12, 20, 28, and 36 touch buttons distributed across the body. Our study indicates that the on-body method achieved an accuracy beyond 95% for the 12- and 20-button layouts, whereas the near-body method did so only for the 12-button layout. Investigating user touch patterns, we applied SVM classifiers, which boost both the on-body and near-body methods to support up to the 28-button layout by learning individual touch patterns. However, using generalized touch patterns did not significantly improve accuracy for more complex layouts, highlighting considerable differences in individual touch habits. When evaluating user experience metrics such as workload perception, confidence, convenience, and willingness-to-use, users consistently favored the 20-button layout regardless of the touch technique used. Remarkably, the 20-button layout, when applied to on-body touch methods, does not necessitate personal touch patterns, showcasing an optimal balance of practicality, effectiveness, and user experience without the need for trained models. In contrast, the near-body touch targeting the 20-button layout needs a personalized model; otherwise, the 12-button layout offers the best immediate practicality.
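As a rough illustration of the per-user classification step, the sketch below trains an RBF-kernel SVM to separate 28 button classes from synthetic touch features; the three-dimensional feature vectors and the data are fabrications for the demo, far simpler than the study's real touch patterns.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical per-user touch samples: each row is a touch feature vector
# (say, a 3D fingertip position), each label one of the 28 buttons.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(28), 10)                    # 10 touches per button
X = rng.normal(size=(280, 3)) + y[:, None] * 3.0    # well-separated synthetic classes

# An RBF-kernel SVM learns one individual's touch pattern, mirroring the
# per-user classifiers that push support up to the 28-button layout.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())      # high accuracy on this toy data
```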
Citations: 0
Do I Just Tap My Headset?
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631451
Anjali Khurana, Michael Glueck, Parmit K. Chilana
A variety of consumer Augmented Reality (AR) applications have been released on mobile devices and novel immersive headsets over the last five years, creating a breadth of new AR-enabled experiences. However, these applications, particularly those designed for immersive headsets, require users to employ unfamiliar gestural input and adopt novel interaction paradigms. To better understand how everyday users discover gestures and classify the types of interaction challenges they face, we observed how 25 novices with diverse backgrounds and levels of technical knowledge used four different AR applications requiring a range of interaction techniques. A detailed analysis of gesture interaction traces showed that users struggled to discover the correct gestures, with the majority of errors occurring when participants could not determine the correct sequence of actions to perform or could not evaluate their actions. To further reflect on the prevalence of our findings, we carried out an expert validation study with eight professional AR designers, engineers, and researchers. We discuss implications for designing discoverable gestural input techniques that align with users' mental models, inventing AR-specific onboarding and help systems, and enhancing system-level machine recognition.
Citations: 0
CAvatar
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631424
Wenqiang Chen, Yexin Hu, Wei Song, Yingcheng Liu, Antonio Torralba, Wojciech Matusik
Human mesh reconstruction is essential for various applications, including virtual reality, motion capture, sports performance analysis, and healthcare monitoring. In healthcare contexts such as nursing homes, it is crucial to employ plausible and non-invasive methods for human mesh reconstruction that preserve privacy and dignity. Traditional vision-based techniques encounter challenges related to occlusion, viewpoint limitations, lighting conditions, and privacy concerns. In this research, we present CAvatar, a real-time human mesh reconstruction approach that innovatively utilizes pressure maps recorded by a tactile carpet as input. This advanced, non-intrusive technology obviates the need for cameras during usage, thereby safeguarding privacy. Our approach addresses several challenges, such as the limited spatial resolution of tactile sensors, extracting meaningful information from noisy pressure maps, and accommodating user variations and multiple users. We have developed an attention-based deep learning network, complemented by a discriminator network, to predict 3D human pose and shape from 2D pressure maps with notable accuracy. Our model demonstrates promising results, with a mean per joint position error (MPJPE) of 5.89 cm and a per vertex error (PVE) of 6.88 cm. To the best of our knowledge, we are the first to generate 3D meshes of human activities solely using tactile carpet signals, offering a novel approach that addresses privacy concerns and surpasses the limitations of existing vision-based and wearable solutions. A demonstration of CAvatar is available at https://youtu.be/ZpO3LEsgV7Y.
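The reported error metrics are simple to compute once predictions exist. A short sketch of MPJPE, assuming (frames, joints, 3) arrays in meters; PVE is the same computation with mesh vertices substituted for joints.

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error: the Euclidean distance between
    predicted and ground-truth joints, averaged over joints and frames.
    pred, gt: (num_frames, num_joints, 3) arrays in meters."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy check: a uniform ~3.4 cm offset per axis gives ~5.89 cm of error.
pred = np.zeros((2, 24, 3))
gt = np.full((2, 24, 3), 0.034)
print(f"MPJPE: {mpjpe(pred, gt) * 100:.2f} cm")  # MPJPE: 5.89 cm
```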
Citations: 0
TextureSight
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631413
Xue Wang, Yang Zhang
Objects engaged by users' hands carry rich contextual information because of their strong correlation with user activities. Tools such as toothbrushes and wipes indicate cleansing and sanitation, while mice and keyboards imply work. Much research has endeavored to sense hand-engaged objects to supply wearables with implicit interactions or ambient computing with personal informatics. We propose TextureSight, a smart-ring sensor that detects hand-engaged objects by sensing their distinctive surface textures using laser speckle imaging on a ring form factor. We conducted a two-day experience sampling study to investigate the unicity and repeatability of the object-texture combinations across routine objects. We grounded our sensing with a theoretical model and simulations, powered it with state-of-the-art deep neural net techniques, and evaluated it with a user study. TextureSight constitutes a valuable addition to the literature for its capability to sense passive objects without emitting EMI or vibration, and for its elimination of a lens, which preserves user privacy, leading to a new, practical method for activity recognition and context-aware computing.
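As a sketch of the recognition stage, a small convolutional classifier over grayscale speckle patches is shown below; the layer sizes, 64x64 patch size, and 16-object class set are assumptions for illustration, far smaller than the paper's state-of-the-art model.

```python
import torch
import torch.nn as nn

class SpeckleNet(nn.Module):
    """Minimal CNN standing in for TextureSight's deep classifier."""
    def __init__(self, num_objects: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_objects)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 64, 64) speckle patches -> (batch, num_objects) logits
        return self.head(self.features(x).flatten(1))

logits = SpeckleNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 16])
```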
Citations: 0
LiqDetector
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631443
Zhu Wang, Yifan Guo, Zhihui Ren, Wenchao Song, Zhuo Sun, Chaoxiong Chen, Bin Guo, Zhiwen Yu
With the advancement of wireless sensing technologies, RF-based contactless liquid detection is attracting more and more attention. Compared with other RF devices, the mmWave radar has the advantages of large bandwidth and low cost. While existing radar-based liquid detection systems demonstrate promising performance, they still share a shortcoming: the detection result depends on container-related factors (e.g., container placement, container caliber, and container material). In this paper, to enable container-independent liquid detection with a COTS mmWave radar, we propose a dual-reflection model by exploring reflections from different interfaces of the liquid container. Specifically, we design a pair of amplitude ratios based on the signals reflected from different interfaces, and theoretically demonstrate how the refractive index of liquids can be estimated by eliminating the container's impact. To validate the proposed approach, we implement a liquid detection system, LiqDetector. Experimental results show that LiqDetector achieves cross-container estimation of the liquid's refractive index with a mean absolute percentage error (MAPE) of about 4.4%. Moreover, the classification accuracies for 6 different liquids and alcohol of different strengths (even with a difference of only 1%) exceed 96% and 95%, respectively. To the best of our knowledge, this is the first study that achieves container-independent liquid detection based on a COTS mmWave radar by leveraging only one pair of Tx-Rx antennas.
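A toy version of the underlying physics helps make the claim concrete: at normal incidence, the Fresnel amplitude reflection coefficient at an interface between media with refractive indices n1 and n2 is r = (n1 - n2)/(n1 + n2), so once the container's own contribution is cancelled (the role of the paper's pair of amplitude ratios), the remaining ratio can be inverted for the liquid's index. The sketch below is a deliberate simplification of the dual-reflection model, with illustrative index values rather than measured mmWave constants.

```python
def fresnel_r(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel amplitude reflection coefficient."""
    return (n1 - n2) / (n1 + n2)

def estimate_liquid_index(amp_ratio: float, n_container: float) -> float:
    """Invert |r| = (n - n_c) / (n + n_c) for the liquid index n (n > n_c).

    Toy model: amp_ratio is the container-liquid reflection magnitude
    after container effects have been cancelled out.
    """
    return n_container * (1 + amp_ratio) / (1 - amp_ratio)

# Forward-simulate a filled glass (index values are illustrative), then invert.
n_glass, n_liquid = 2.55, 8.9
r = abs(fresnel_r(n_liquid, n_glass))
print(estimate_liquid_index(r, n_glass))  # ~8.9
```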
Citations: 0
Scribe
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631411
Yang Bai, Irtaza Shahid, Harshvardhan Takawale, Nirupam Roy
This paper presents the design and implementation of Scribe, a comprehensive voice processing and handwriting interface for voice assistants. Distinct from prior work, Scribe is a precise tracking interface that can co-exist with the voice interface on low-sampling-rate voice assistants. Scribe can be used for 3D free-form drawing, writing, and motion tracking for gaming. Taking handwriting as a specific application, it can also capture natural strokes and the individualized style of writing while occupying only a single frequency. The core technique is an accurate acoustic ranging method called Cross Frequency Continuous Wave (CFCW) sonar, which enables voice assistants to use ultrasound as a ranging signal while using the regular microphone system of voice assistants as a receiver. We also design a new optimization algorithm that requires only a single frequency for time difference of arrival. The Scribe prototype achieves 73 μm median error for 1D ranging and 1.4 mm median error in 3D tracking of an acoustic beacon using the microphone array found in voice assistants. Our implementation of an in-air handwriting interface achieves 94.1% accuracy with automatic handwriting-to-text software, similar to writing on paper (96.6%). At the same time, the error rate of voice-based user authentication increases only from 6.26% to 8.28%.
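Once time differences of arrival are in hand, beacon localization reduces to a small least-squares problem. Below is a generic 2D multilateration sketch over a hypothetical square four-microphone layout; it illustrates the TDoA step only and is not the paper's CFCW estimator.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at room temperature

# Hypothetical 10 cm square mic layout; real voice assistants differ per device.
MICS = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1]])

def locate(tdoas: np.ndarray) -> np.ndarray:
    """Least-squares 2D source position from TDoAs relative to mic 0."""
    def residuals(p):
        d = np.linalg.norm(MICS - p, axis=1)
        return (d[1:] - d[0]) - SPEED_OF_SOUND * tdoas
    return least_squares(residuals, x0=np.array([0.05, 0.05])).x

# Forward-simulate a beacon at (0.30, 0.20) m, then recover its position.
src = np.array([0.30, 0.20])
dists = np.linalg.norm(MICS - src, axis=1)
print(locate((dists[1:] - dists[0]) / SPEED_OF_SOUND))  # ~[0.30 0.20]
```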
Citations: 0
PmTrack
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631433
Hankai Liu, Xiulong Liu, Xin Xie, Xinyu Tong, Keqiu Li
The difficulty of obtaining targets' identities poses a significant obstacle to the pursuit of personalized and customized millimeter-wave (mmWave) sensing. Existing solutions that learn individual differences from signal features have limitations in practical applications. This paper presents PmTrack, a personalized mmWave-based human tracking system that introduces inertial measurement units (IMUs) as identity indicators. Widely available in portable devices such as smartwatches and smartphones, IMUs can upload identity and data over existing wireless networks, and are therefore able to assist radar target identification in a lightweight manner with little deployment or carrying burden for users. PmTrack innovatively adopts orientation as the matching feature, thus overcoming the data heterogeneity between radar and IMU while avoiding the effect of cumulative errors. In the implementation of PmTrack, we propose a comprehensive set of optimization methods for detection enhancement, interference suppression, continuity maintenance, and trajectory correction, which successfully solve a series of practical problems caused by the three major challenges of weak reflection, point cloud overlap, and body-bounce ghosts in multi-person tracking. In addition, an orientation correction method is proposed to overcome IMU gimbal lock. Extensive experimental results demonstrate that PmTrack achieves an identification accuracy of 98% and 95% with five people in a hall and a meeting room, respectively.
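The orientation-matching idea can be framed as an assignment problem: build a cost matrix of heading disagreement between every radar trajectory and every IMU stream, then solve it optimally. The sketch below shows only this matching step on synthetic headings; the paper's full pipeline adds the detection, suppression, and correction stages described above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(radar_headings: np.ndarray, imu_headings: np.ndarray):
    """Assign radar tracks to IMU identities by orientation similarity.

    radar_headings: (num_tracks, T) heading sequences from radar trajectories
    imu_headings:   (num_users, T) heading sequences from the users' IMUs
    """
    # Cost = mean absolute heading difference, wrapped to [-pi, pi].
    diff = radar_headings[:, None, :] - imu_headings[None, :, :]
    cost = np.abs((diff + np.pi) % (2 * np.pi) - np.pi).mean(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Two users: one heading east (~0 rad), one north (~pi/2); radar track order
# is swapped relative to IMU order, and the matcher recovers the identities.
t = np.linspace(0, 1, 50)
imu = np.stack([0.0 * t, np.pi / 2 + 0.05 * np.sin(t)])
radar = imu[::-1] + np.random.default_rng(1).normal(0, 0.1, imu.shape)
print(match_tracks(radar, imu))  # [(0, 1), (1, 0)]
```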
Citations: 0
AdaStreamLite
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631460
Yuheng Wei, Jie Xiong, Hui Liu, Yingtao Yu, Jiangtao Pan, Junzhao Du
Streaming speech recognition aims to transcribe speech to text in a streaming manner, providing real-time speech interaction for smartphone users. However, it is not trivial to develop a high-performance streaming speech recognition system that runs purely on mobile platforms, due to the complex real-world acoustic environments and the limited computational resources of smartphones. Most existing solutions lack generalization to unseen environments and have difficulty working with streaming speech. In this paper, we design AdaStreamLite, an environment-adaptive streaming speech recognition tool for smartphones. AdaStreamLite interacts with its surroundings to capture the characteristics of the current acoustic environment, improving robustness against ambient noise in a lightweight manner. We design an environment representation extractor to model acoustic environments with compact feature vectors, and construct a representation lookup table to improve the generalization of AdaStreamLite to unseen environments. We train our system using large publicly available speech datasets covering different languages. We conduct experiments in a wide range of real acoustic environments with different smartphones. The results show that AdaStreamLite outperforms state-of-the-art methods in terms of recognition accuracy, computational resource consumption, and robustness against unseen environments.
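A minimal sketch of the representation lookup idea: keep one embedding per known acoustic environment and, at run time, pick the stored environment closest to the current clip's embedding by cosine similarity. The embedding size and environment names here are assumptions; the real extractor is learned.

```python
import numpy as np

class EnvironmentLookup:
    """Nearest-neighbour table over environment embeddings."""

    def __init__(self):
        self.names, self.vecs = [], []

    def add(self, name: str, embedding: np.ndarray) -> None:
        self.names.append(name)
        self.vecs.append(embedding / np.linalg.norm(embedding))

    def closest(self, embedding: np.ndarray) -> str:
        q = embedding / np.linalg.norm(embedding)
        sims = np.stack(self.vecs) @ q  # cosine similarities
        return self.names[int(np.argmax(sims))]

rng = np.random.default_rng(0)
table = EnvironmentLookup()
cafe, street = rng.normal(size=64), rng.normal(size=64)
table.add("cafe", cafe)
table.add("street", street)
print(table.closest(cafe + 0.1 * rng.normal(size=64)))  # cafe
```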
Citations: 0
JoulesEye
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631422
Rishiraj Adhikary, M. Sadeh, N. Batra, Mayank Goel
Smartphones and smartwatches have contributed significantly to fitness monitoring by providing real-time statistics, thanks to accurate tracking of physiological indices such as heart rate. However, the estimation of calories burned during exercise is inaccurate and cannot be used for medical diagnosis. In this work, we present JoulesEye, a smartphone thermal-camera-based system that can accurately estimate calorie burn by monitoring respiration rate. We evaluated JoulesEye on 54 participants who performed high-intensity cycling and running. The mean absolute percentage error (MAPE) of JoulesEye was 5.8%, which is significantly better than the MAPE of 37.6% observed with commercial smartwatch-based methods that only use heart rate. Finally, we show that an ultra-low-resolution thermal camera small enough to fit inside a watch or other wearables is sufficient for accurate calorie burn estimation. These results suggest that JoulesEye is a promising new method for accurate and reliable calorie burn estimation.
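The sensing principle, recovering respiration rate from the periodic warming of the nostril region on exhalation, can be sketched as a band-limited spectral peak search. The breathing band, sampling rate, and simulated trace below are assumptions for illustration; mapping the recovered rate to calorie burn is the paper's separate modeling step.

```python
import numpy as np

def respiration_rate(nostril_temp: np.ndarray, fps: float) -> float:
    """Dominant breathing frequency (breaths/min) from a nostril-region
    temperature trace, searched within a plausible 0.1-0.7 Hz band."""
    x = nostril_temp - nostril_temp.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Simulate 60 s of breathing at 0.4 Hz (24 breaths/min), sampled at 8 fps.
t = np.arange(0, 60, 1 / 8)
trace = 34.0 + 0.3 * np.sin(2 * np.pi * 0.4 * t) \
        + 0.05 * np.random.default_rng(2).normal(size=t.size)
print(respiration_rate(trace, fps=8))  # ~24.0
```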
Citations: 0