
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Publications

PackquID: In-packet Liquid Identification Using RF Signals
Pub Date : 2022-01-01 DOI: 10.1145/3569469
Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li
There are many scenarios where the liquid is occluded by other items (e.g., …)
{"title":"PackquID: In-packet Liquid Identification Using RF Signals","authors":"Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li","doi":"10.1145/3569469","DOIUrl":"https://doi.org/10.1145/3569469","url":null,"abstract":"There are many scenarios where the liquid is occluded by other items ( e.g","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"21 1","pages":"181:1-181:27"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73214010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen
Pub Date : 2022-01-01 DOI: 10.1145/3550337
T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio
Educational apps support learning, but handwriting training is still based on analog pen and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills due to the different haptic feedback of the tablet, stylus, or finger compared to pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on a standard paper placed on a tablet's screen, augmenting the paper from below, showing animated letters and individual feedback. We conducted two online surveys with a total of 29 parents and teachers of elementary school pupils, and a user study with 13 children and 13 parents for evaluation. Our results show the importance of the genuine analog haptic feedback combined with the augmentation of SpARklingPaper. It was rated superior to our stylus baseline condition regarding pen handling, writing training success, motivation, and overall impression. SpARklingPaper can be a blueprint for high-fidelity haptic feedback handwriting training systems.
{"title":"SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen","authors":"T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio","doi":"10.1145/3550337","DOIUrl":"https://doi.org/10.1145/3550337","url":null,"abstract":"Educational apps support learning, but handwriting training is still based on analog pen- and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills due to the different haptic feedback of the tablet, stylus, or finger compared to pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on a standard paper placed on a tablet’s screen, augmenting the paper from below, showing animated letters and individual feedback. We conducted two online surveys with overall 29 parents and teachers of elementary school pupils and a user study with 13 children and 13 parents for evaluation. Our results show the importance of the genuine analog haptic feedback combined with the augmentation of SpARklingPaper. It was rated superior compared to our stylus baseline condition regarding pen-handling, writing training-success, motivation, and overall impression. SpARklingPaper can be a blueprint for high-fidelity haptic feedback handwriting training systems.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"102 1","pages":"113:1-113:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73864905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments
Pub Date : 2022-01-01 DOI: 10.1145/3517224
Luca Arrotta, Gabriele Civitarese, C. Bettini
The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. Applying Explainable Artificial Intelligence (XAI) to ADL recognition has the potential to make this process trusted, transparent, and understandable. The few works that investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology that transforms sensor data into semantic images in order to take advantage of XAI methods based on Convolutional Neural Networks (CNNs). We apply different XAI approaches for deep learning and, from the resulting heat maps, generate explanations in natural language. In order to identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.
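To make the recognition-plus-explanation pipeline above concrete, here is a minimal sketch (not the authors' code) of the two steps the abstract names: encoding a window of binary sensor events as a 2D "semantic image", and turning a CNN relevance heat map into a template-based natural-language explanation. The sensor names, window length, and explanation template are illustrative assumptions.

```python
import numpy as np

# Hypothetical smart-home sensors; the real system's sensor set differs.
SENSORS = ["fridge_door", "stove", "kitchen_motion", "bed_pressure"]

def to_semantic_image(events, window=32):
    """Encode (time_step, sensor_name) activations as a sensors-x-time image."""
    img = np.zeros((len(SENSORS), window), dtype=np.float32)
    for t, name in events:
        img[SENSORS.index(name), t] = 1.0
    return img  # in the paper, an image like this is fed to a CNN classifier

def explain(heatmap, activity, top_k=2):
    """Turn an XAI relevance map (same shape as the image) into text."""
    relevance = heatmap.sum(axis=1)            # aggregate relevance over time
    top = np.argsort(relevance)[::-1][:top_k]  # most relevant sensor rows
    used = " and ".join(SENSORS[i] for i in top)
    return f"The activity '{activity}' was recognized mainly because {used} fired."

events = [(3, "fridge_door"), (4, "stove"), (6, "kitchen_motion")]
img = to_semantic_image(events)
# Stand-in for a CNN heat map (e.g., Grad-CAM output); here just the image itself.
print(explain(img, "cooking"))
```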
{"title":"DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments","authors":"Luca Arrotta, Gabriele Civitarese, C. Bettini","doi":"10.1145/3517224","DOIUrl":"https://doi.org/10.1145/3517224","url":null,"abstract":"The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. The application of Explainable Artificial Intelligence (XAI) to ADLs recognition has the potential of making this process trusted, transparent and understandable. The few works that investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology to transform sensor data into semantic images to take advantage of XAI methods based on Convolutional Neural Networks (CNN). We apply different XAI approaches for deep learning and, from the resulting heat maps, we generate explanations in natural language. In order to identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"14 1","pages":"1:1-1:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81979148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing
Pub Date : 2022-01-01 DOI: 10.1145/3569500
Z. Wang
Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from a short sensing range (e.g., ≤ 0.5 m for a thermometer), are susceptible to interference (e.g., smoke detectors), or incur high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use and timely room-scale fire detection system based on acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which remotely senses fire by emitting inaudible sound waves. Unlike existing works that use the signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense fire, exploiting the unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the transmission signal length with the sensing distance. The transmission frame is carefully selected to expand the sensing range and balance a series of practical factors that impact the system's performance. We further design a simple yet effective approach to remove the environmental interference caused by signal reflection, based on a deep investigation into the channel differences between sound reflection and sound absorption. Specifically, sound reflection results in a much more stable pattern in terms of signal energy than sound absorption, which can be exploited to differentiate the channel measurements caused by fire from other interference. Extensive experiments demonstrate that HearFire enables a maximum 7 m sensing range and achieves timely fire detection in indoor environments with up to 99.2% accuracy under different experiment configurations.
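As a rough, hedged illustration of the detection logic described above (flame absorbs the inaudible probe and perturbs the channel, while static reflectors return stable energy), the sketch below computes per-frame energy in an assumed 18-22 kHz probe band and flags fire when the energy both drops against a calm-room baseline and fluctuates. The band, frame length, and thresholds are assumptions, not values from the paper.

```python
import numpy as np

FS = 48_000  # assumed sampling rate of the collocated microphone

def band_energy(frame, lo=18_000, hi=22_000):
    """Energy of one received frame inside the inaudible probe band."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(np.abs(spec[mask]) ** 2))

def fire_likely(frames, baseline, drop_thr=0.3, jitter_thr=0.1):
    """frames: recent received frames; baseline: calm-room band energy."""
    e = np.array([band_energy(f) for f in frames])
    drop = (baseline - e.mean()) / baseline    # absorption -> energy drop
    jitter = e.std() / (e.mean() + 1e-12)      # flame flicker -> instability
    return drop > drop_thr and jitter > jitter_thr

# Synthetic check: attenuated, fluctuating frames against a clean baseline.
rng = np.random.default_rng(0)
t = np.arange(2048) / FS
clean = np.sin(2 * np.pi * 20_000 * t)
frames = [0.5 * clean * rng.uniform(0.6, 1.4) for _ in range(20)]
print(fire_likely(frames, band_energy(clean)))  # -> True
```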
{"title":"HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing","authors":"Z. Wang","doi":"10.1145/3569500","DOIUrl":"https://doi.org/10.1145/3569500","url":null,"abstract":"Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from short sensing range (e.g., ≤ 0.5m using a thermometer), susceptible to interferences (e.g., smoke detector) or high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use and timely room-scale fire detection system via acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which remotely senses fire by emitting inaudible sound waves. Unlike existing works that use signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense the fire due to unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the relationship between the transmission signal length and sensing distance. The transmission frame is carefully selected to expand sensing range and balance a series of practical factors that impact the system’s performance. We further design a simple yet effective approach to remove the environmental interference caused by signal reflection by conducting a deep investigation into channel differences between sound reflection and sound absorption. Specifically, sound reflection results in a much more stable pattern in terms of signal energy than sound absorption, which can be exploited to differentiate the channel measurements caused by fire from other interferences. Extensive experiments demonstrate that HireFire enables a maximum 7m sensing range and achieves timely fire detection in indoor environments with up to 99 . 2% accuracy under different experiment configurations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"28 1","pages":"185:1-185:25"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78973624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions
Pub Date : 2022-01-01 DOI: 10.1145/3570344
Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores
Hand-grip strength is widely used to estimate muscle strength and serves as a general indicator of a person's overall health, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force-resistant pressure sensors embedded onto objects. Both of these solutions require the user to interact with a dedicated measurement device, which unnecessarily restricts the contexts in which estimates can be acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback on everyday interactions for health information without affecting the user's everyday routines. We present two prototypes integrating HIPPO: an early smart-glove proof-of-concept, and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare it against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects and participants. The force estimates correlate with those produced by pressure-based devices, and HIPPO can determine the correct hand-grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions because HIPPO blends the estimation with everyday interactions.
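As a hedged sketch of the sensing principle (light reflectivity at the hand changes with grip force), the snippet below reduces a photodiode trace recorded during a grip to two simple features, the depth and steepness of the reflectivity change, and maps the depth to a grip-strength category. The features, thresholds, and three-class scheme are illustrative assumptions; the paper's pipeline is more elaborate.

```python
import numpy as np

def grip_features(trace, baseline):
    """trace: photodiode samples during a grip; baseline: resting light level."""
    depth = (baseline - trace.min()) / baseline     # relative reflectivity change
    slope = float(np.max(np.abs(np.diff(trace))))   # steepest per-sample change
    return depth, slope

def grip_category(depth, weak_thr=0.15, strong_thr=0.45):
    """Map the relative reflectivity change to an assumed 3-class scheme."""
    if depth < weak_thr:
        return "weak"
    return "strong" if depth > strong_thr else "normal"

baseline = 1.0
trace = baseline - 0.3 * np.hanning(64)   # a grip that dips reflectivity ~30%
depth, slope = grip_features(trace, baseline)
print(grip_category(depth))               # -> "normal"
```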
{"title":"HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions","authors":"Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores","doi":"10.1145/3570344","DOIUrl":"https://doi.org/10.1145/3570344","url":null,"abstract":"Hand-grip strength is widely used to estimate muscle strength and it serves as a general indicator of the overall health of a person, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force resistant pressure sensors embedded onto objects. Both of these solutions require the user to interact with a dedicated measurement device which unnecessarily restricts the contexts where estimates are acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback everyday interactions for health information without affecting the user’s everyday routines. We present two prototypes integrating HIPPO, an early smart glove proof-of-concept, and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare HIPPO against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects, and participants. The force strength estimates correlate with estimates produced by pressure-based devices, and can also determine the correct hand grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions as HIPPO blends the estimation with everyday interactions.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"61 1","pages":"209:1-209:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74486798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction
Pub Date : 2022-01-01 DOI: 10.1145/3534620
Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang
Feet are the foundation of our bodies: they not only perform locomotion but also take part in expressing intent and emotion. Foot gestures are thus an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed from a set of gestures elicited in a participatory design session with 12 users. We implement a machine learning model in Shoes++ that recognizes two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracy, respectively (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes, supporting comfortable social foot interaction without taking off the shoes. Based on users' qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, making social interactions and interpersonal dynamics more engaging in an expanded design space.
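The recognition step can be pictured with the minimal sketch below: window the sole's IMU stream, reduce each window to per-axis statistics, and classify with a lightweight nearest-centroid model. The feature set, window length, and toy two-gesture vocabulary are assumptions for illustration; the paper trains on its elicited multi-person gesture set.

```python
import numpy as np

def imu_features(window):
    """window: (n_samples, 6) accel+gyro; returns per-axis mean/std/energy."""
    return np.concatenate([window.mean(0), window.std(0), (window ** 2).mean(0)])

class NearestCentroid:
    """Tiny stand-in for the paper's lightweight classifier."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.labels = sorted(set(y))
        self.centroids = {c: X[y == c].mean(0) for c in self.labels}
        return self
    def predict(self, x):
        return min(self.labels, key=lambda c: np.linalg.norm(x - self.centroids[c]))

rng = np.random.default_rng(1)
tap  = [imu_features(rng.normal(0, 1.0, (50, 6))) for _ in range(20)]  # energetic
rest = [imu_features(rng.normal(0, 0.1, (50, 6))) for _ in range(20)]  # quiet
X = np.array(tap + rest); y = ["tap"] * 20 + ["rest"] * 20
clf = NearestCentroid().fit(X, y)
print(clf.predict(imu_features(rng.normal(0, 1.0, (50, 6)))))  # -> "tap"
```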
{"title":"Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction","authors":"Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang","doi":"10.1145/3534620","DOIUrl":"https://doi.org/10.1145/3534620","url":null,"abstract":"Feet are the foundation of our bodies that not only perform locomotion but also participate in intent and emotion expression. Thus, foot gestures are an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed by a set of gestures elicited from a participatory design session with 12 users. We implement a machine learning model in Shoes++ which can recognize two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracies (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes to support comfortable social foot interaction without taking off the shoes. Based on users’ qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, thus making social interactions or interpersonal dynamics more engaging in an expanded design space. Additional Key and smart sole Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2, (June 2022),","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"8 1","pages":"85:1-85:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74092177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps
Pub Date : 2022-01-01 DOI: 10.1145/3550305
Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani
This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin and lightweight, and can be attached to existing garments to enhance users' interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The straps allow the effective combination of these inputs as a mode of interaction with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of using StretchAR as an input modality when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we provide recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
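To illustrate how touch and stretch might be separated on a single strap, the toy discriminator below assumes touch appears as a brief, small deviation of the normalized strap reading and stretch as a large, sustained one. The signal model and thresholds are invented for this sketch; the actual strap construction and electronics are described in the paper.

```python
import numpy as np

def classify_strap(samples, baseline, touch_thr=0.05, stretch_thr=0.3,
                   sustain=10):
    """samples: normalized strap readings; returns 'touch', 'stretch', or 'idle'."""
    dev = np.abs(samples - baseline) / baseline        # relative deviation
    if (dev > stretch_thr).sum() >= sustain:           # large and sustained
        return "stretch"
    if dev.max() > touch_thr:                          # brief, small blip
        return "touch"
    return "idle"

base = 1.0
touch = np.full(50, base); touch[20:23] += 0.1        # 3-sample blip
stretch = np.full(50, base); stretch[10:40] += 0.5    # sustained elongation
print(classify_strap(touch, base), classify_strap(stretch, base))  # touch stretch
```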
{"title":"StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps","authors":"Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani","doi":"10.1145/3550305","DOIUrl":"https://doi.org/10.1145/3550305","url":null,"abstract":"presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users’ interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The strap allows the effective combination of these inputs as a mode of interaction with the content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of the use of StretchAR as input modalities when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with a 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch–stretch capabilities of StretchAR. Finally, we facilitate recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses. Exploiting as","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"9 1","pages":"134:1-134:26"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76663092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses
Pub Date : 2022-01-01 DOI: 10.1145/3569482
Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen
Spontaneous selection of IoT devices from a head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1-compatible IoT device by nodding at it, pointing at it, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of the advertising signals of IoT and wrist-worn devices. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real time for the two hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves a selection accuracy higher than 90% within a 3-meter distance in front of the user. In a user study that mimics real-life usage scenarios, the overall selection accuracy is 96.7% for a set of 22 participants who were diverse in age, technology savviness, and body structure.
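The AoA estimation at the heart of the system rests on standard array geometry: for two antennas spaced d apart, a narrowband 2.4 GHz BLE tone arriving from angle θ produces a phase difference Δφ = 2πd·sin(θ)/λ between the elements. The sketch below inverts that relation; the 5 cm spacing is an assumed value (it must stay below λ/2, about 6.2 cm, to keep the arcsin unambiguous), not the paper's actual array layout.

```python
import numpy as np

C = 3e8              # speed of light, m/s
F = 2.44e9           # a BLE channel sits near 2.44 GHz
LAM = C / F          # wavelength ~= 12.3 cm
D = 0.05             # assumed element spacing (< LAM/2 for unambiguous arcsin)

def aoa_degrees(dphi):
    """Invert dphi = 2*pi*D*sin(theta)/LAM to the arrival angle in degrees."""
    s = np.clip(dphi * LAM / (2 * np.pi * D), -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Round trip: a wavefront arriving from 25 degrees off boresight.
theta = np.radians(25.0)
dphi = 2 * np.pi * D * np.sin(theta) / LAM
print(round(aoa_degrees(dphi), 1))  # -> 25.0
```

In practice a real pipeline would also unwrap the measured phase and average over the IQ samples of the constant-tone extension before applying this inversion.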
{"title":"BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses","authors":"Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen","doi":"10.1145/3569482","DOIUrl":"https://doi.org/10.1145/3569482","url":null,"abstract":"Spontaneous selection of IoT devices from the head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1 compatible IoT device by nodding at, pointing at, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of IoT and wrist-worn devices’ advertising signals. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real-time for both hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves a higher than 90% selection accuracy within a 3 meters distance in front of the user. In a user study that mimics real-life usage cases, the overall selection accuracy is 96.7% for a diverse set of 22 participants in terms of age, technology savviness, and body structures.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"280 1","pages":"198:1-198:28"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80136760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches
Pub Date : 2022-01-01 DOI: 10.1145/3569473
J. Huh, Hyejin Shin, Hongmin Kim, Eunyong Cheon, Young-sok Song, Choong-Hoon Lee, Ian Oakley
{"title":"WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches","authors":"J. Huh, Hyejin Shin, Hongmin Kim, Eunyong Cheon, Young-sok Song, Choong-Hoon Lee, Ian Oakley","doi":"10.1145/3569473","DOIUrl":"https://doi.org/10.1145/3569473","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"6 1","pages":"167:1-167:34"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90000381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects
Pub Date : 2022-01-01 DOI: 10.1145/3550282
Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu
Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily lives. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee that provides BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone, so that it acts as an extension of an existing headphone. The top-level software includes a series of seamless image processing algorithms that address the challenges introduced by the unconstrained wearable form factor while ensuring excellent real-time performance. Moreover, users are given a personalized guidance scheme so that they can get started with LiSee quickly based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and in participants' homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants in reaching surrounding objects.
{"title":"LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects","authors":"Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu","doi":"10.1145/3550282","DOIUrl":"https://doi.org/10.1145/3550282","url":null,"abstract":"Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily life. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee to provide BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone such that it is an extension of the existing headphone. The top-level software includes a series of seamless image processing algorithms to solve the challenges brought by the unconstrained wearable form so as to ensure excellent real-time performance. Moreover, users are provided with a personalized guidance scheme so that they can use LiSee quickly based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and participants’ homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants to reach surrounding objects.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"17 1","pages":"104:1-104:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82675803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2