
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.: Latest Publications

PackquID: In-packet Liquid Identification Using RF Signals
Pub Date : 2022-01-01 DOI: 10.1145/3569469
Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li
There are many scenarios where the liquid is occluded by other items (e.g. …
Citations: 1
SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen
Pub Date : 2022-01-01 DOI: 10.1145/3550337
T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio
Educational apps support learning, but handwriting training is still based on analog pen and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills, because the haptic feedback of a tablet, stylus, or finger differs from that of pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on a standard sheet of paper placed on a tablet’s screen; the tablet augments the paper from below, showing animated letters and individual feedback. For evaluation, we conducted two online surveys with a total of 29 parents and teachers of elementary school pupils, and a user study with 13 children and 13 parents. Our results show the importance of genuine analog haptic feedback combined with SpARklingPaper’s augmentation: it was rated superior to our stylus baseline condition regarding pen handling, writing training success, motivation, and overall impression. SpARklingPaper can serve as a blueprint for high-fidelity haptic feedback handwriting training systems.
Citations: 1
DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments
Pub Date : 2022-01-01 DOI: 10.1145/3517224
Luca Arrotta, Gabriele Civitarese, C. Bettini
The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. Applying Explainable Artificial Intelligence (XAI) to ADL recognition has the potential to make this process trusted, transparent, and understandable. The few works that have investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology that transforms sensor data into semantic images in order to take advantage of XAI methods based on Convolutional Neural Networks (CNNs). We apply different XAI approaches for deep learning and, from the resulting heat maps, generate explanations in natural language. To identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.
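The core encoding step (turning discrete sensor events into an image-like grid a CNN can consume) can be sketched as follows. The sensor names, the semantic grouping, and the binary encoding are illustrative assumptions, not DeXAR's actual design:

```python
from collections import OrderedDict

# Hypothetical mapping from raw sensors to semantic groups (illustrative only).
SEMANTIC_GROUPS = OrderedDict([
    ("kitchen",  {"stove", "fridge", "kitchen_motion"}),
    ("bathroom", {"shower", "bathroom_motion"}),
    ("bedroom",  {"bed_pressure", "bedroom_motion"}),
])

def to_semantic_image(window, n_steps):
    """Encode a window of (time_step, sensor) events as a 2D binary grid:
    one row per semantic group, one column per time step. A CNN (and a
    CNN-based XAI heat-map method) can then operate on this grid."""
    image = [[0] * n_steps for _ in SEMANTIC_GROUPS]
    for t, sensor in window:
        for row, members in enumerate(SEMANTIC_GROUPS.values()):
            if sensor in members:
                image[row][t] = 1
    return image

# Three kitchen-related events inside a 4-step window light up the kitchen row.
img = to_semantic_image([(0, "stove"), (1, "fridge"), (3, "kitchen_motion")], n_steps=4)
```

Grouping sensors by room keeps the image rows semantically meaningful, which is what lets a heat map over the image be read back as a natural-language explanation.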
Citations: 10
HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing
Pub Date : 2022-01-01 DOI: 10.1145/3569500
Z. Wang
Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from a short sensing range (e.g., ≤ 0.5 m for a thermometer), are susceptible to interference (e.g., smoke detectors), or incur high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use, and timely room-scale fire detection system based on acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which senses fire remotely by emitting inaudible sound waves. Unlike existing works that use the signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense fire, exploiting the unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the transmission signal length with the sensing distance. The transmission frame is carefully selected to expand the sensing range and balance a series of practical factors that impact the system’s performance. We further design a simple yet effective approach to remove the environmental interference caused by signal reflection, based on a deep investigation into the channel differences between sound reflection and sound absorption. Specifically, sound reflection produces a much more stable pattern in terms of signal energy than sound absorption, which can be exploited to differentiate the channel measurements caused by fire from other interference. Extensive experiments demonstrate that HearFire enables a maximum 7 m sensing range and achieves timely fire detection in indoor environments with up to 99.2% accuracy under different experiment configurations.
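The sound-speed cue the abstract relies on follows from basic physics: the speed of sound in air grows roughly with the square root of absolute temperature, so a fire along the speaker-to-microphone path shortens the time of flight. A minimal sketch of that effect; the 5% detection threshold and the temperatures are illustrative, not HearFire's calibrated values:

```python
import math

def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (in Celsius)."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

def tof_us(path_m, temp_c):
    """Time of flight (microseconds) over a fixed speaker-to-microphone path."""
    return 1e6 * path_m / speed_of_sound(temp_c)

# Heating the air along a 7 m path from 20 degrees C to 200 degrees C shortens
# the ToF, which a fixed transceiver pair can observe without any reflection.
baseline_us = tof_us(7.0, 20.0)
heated_us = tof_us(7.0, 200.0)
fire_suspected = (baseline_us - heated_us) / baseline_us > 0.05  # 5%: illustrative
```

This is why the approach does not need a reflection off the flame: the direct path itself changes when the air in between heats up.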
Citations: 2
LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects
Pub Date : 2022-01-01 DOI: 10.1145/3550282
Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu
Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily life. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee to provide BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone, making it an extension of an everyday accessory. The top-level software includes a series of seamless image processing algorithms that address the challenges introduced by the unconstrained wearable form factor, ensuring excellent real-time performance. Moreover, users are provided with a personalized guidance scheme so that they can quickly start using LiSee based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and in participants’ homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants in reaching surrounding objects.
Citations: 2
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches
Pub Date : 2022-01-01 DOI: 10.1145/3569473
J. Huh, Hyejin Shin, Hongmin Kim, Eunyong Cheon, Young-sok Song, Choong-Hoon Lee, Ian Oakley
Citations: 0
SSpoon: A Shape-changing Spoon That Optimizes Bite Size for Eating Rate Regulation
Pub Date : 2022-01-01 DOI: 10.1145/3550312
Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen
One key strategy for combating obesity is to slow down eating; however, this is difficult to achieve because eating is habitual. In this paper, we explored the feasibility of incorporating a shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms, which could affect bite size while maintaining usability. Those findings enabled the development of the SSpoon prototype through a series of design explorations optimised for user adoption. Then, we applied two shape-changing strategies (instant transformations based on food form and subtle transformations based on food intake) and examined them in two comparative studies involving a full-course meal using a Wizard-of-Oz approach. The results indicated that SSpoon could achieve effects comparable to a small spoon (5 ml), reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining user satisfaction similar to a normal eating spoon (10 ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative for combating the growing concern of obesity.
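The two strategies can be illustrated with a toy control policy: an instant transformation triggered by the food form, and a subtle one driven by cumulative intake. The depths, meal size, and linear schedule below are hypothetical placeholders, not SSpoon's measured transformation range:

```python
def spoon_depth(food_form, grams_consumed, full_depth_mm=10.0,
                min_depth_mm=5.0, meal_grams=400.0):
    """Toy shape-change policy. Solid foods get an instant shallow bowl
    (instant transformation); for other foods, the bowl flattens linearly
    as cumulative intake grows (subtle transformation). All numbers here
    are hypothetical placeholders."""
    if food_form == "solid":
        return min_depth_mm  # instant: a shallower bowl yields smaller bites
    progress = min(grams_consumed / meal_grams, 1.0)
    return full_depth_mm - (full_depth_mm - min_depth_mm) * progress
```

The point of the gradual schedule is that the change stays below the eater's notice while still shrinking bite size over the course of the meal.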
Citations: 1
StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps
Pub Date : 2022-01-01 DOI: 10.1145/3550305
Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani
This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users’ interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The strap allows the effective combination of these inputs as a mode of interaction with the content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of using StretchAR straps as input modalities when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses, with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we offer recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
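One way to read the strap's dual sensitivity is a simple amplitude-and-duration rule on the normalized sensor signal: a brief or small deviation reads as a touch, a large sustained one as a stretch. A sketch under assumed thresholds, not StretchAR's actual signal processing:

```python
def classify_event(samples, touch_thresh=0.15, stretch_thresh=0.40):
    """Classify one window of the normalized strap signal.
    A large deviation sustained for most of the window reads as a stretch;
    a smaller or briefer deviation reads as a touch; otherwise idle.
    Both thresholds are assumed values, not StretchAR's calibration."""
    peak = max(samples)
    sustained = sum(1 for s in samples if s > stretch_thresh) / len(samples)
    if peak > stretch_thresh and sustained > 0.5:
        return "stretch"
    if peak > touch_thresh:
        return "touch"
    return "idle"
```

Requiring the large deviation to persist keeps a sharp tap, however strong, from being misread as a stretch.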
Citations: 3
WearSign: Pushing the Limit of Sign Language Translation Using Inertial and EMG Wearables
Pub Date : 2022-01-01 DOI: 10.1145/3517257
Qian Zhang, JiaZhen Jing, Dong Wang, Run Zhao
Sign language translation (SLT) is considered the core technology for breaking the communication barrier between deaf and hearing people. However, most studies focus only on recognizing the sequence of sign gestures (sign language recognition, SLR), ignoring the significant difference in linguistic structure between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from sign-induced sensory signals into spoken texts. WearSign leverages a smartwatch and an armband of electromyography (EMG) sensors to capture sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework that uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior approaches usually degrades drastically on sentences with complex structures or sentences unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the much more abundant spoken language data to synthesize paired sign language data. We include the synthetic pairs in the training process, which enables the network to learn a better sequence-to-sequence mapping and to generate more fluent spoken language sentences. We construct an American Sign Language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen sentence tests, respectively. We also implement a real-time version of WearSign that runs fully on a smartphone with low latency and energy overhead.
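The two training ideas (gloss-based intermediate supervision and back-translation) can be sketched in miniature. Here `text_to_sign` and the loss weighting are hypothetical stand-ins for the paper's reverse model and tuned hyperparameters:

```python
def multitask_loss(translation_loss, gloss_loss, gloss_weight=0.5):
    """Joint objective sketch: gloss (sign-label) supervision is added to the
    end-to-end sensor-to-text translation loss as an auxiliary term. The 0.5
    weight is an assumption, not the paper's tuned value."""
    return translation_loss + gloss_weight * gloss_loss

def back_translate(spoken_sentences, text_to_sign):
    """Back-translation sketch: synthesize (sign, text) training pairs from
    abundant spoken-language text via a reverse model `text_to_sign`
    (a hypothetical callable standing in for the text-to-sign model)."""
    return [(text_to_sign(s), s) for s in spoken_sentences]
```

The auxiliary gloss term anchors the encoder to the sign sequence while the decoder is free to reorder words into spoken-language syntax, which is exactly the SLR-versus-SLT gap the abstract describes.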
Citations: 16
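The WearSign entry reports its results as word error rate (WER). WER is conventionally defined as the word-level Levenshtein (edit) distance between hypothesis and reference, normalized by the reference length. A minimal sketch of that standard metric (not the authors' evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("i am hungry now", "i am very hungry"))  # 0.5 (1 sub + 1 sub over 4 words)
```

A perfect transcript yields 0.0; WearSign's reported 4.7% corresponds to roughly one word error per 21 reference words.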
Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction
Pub Date : 2022-01-01 DOI: 10.1145/3534620
Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang
Feet are the foundation of our bodies: they not only perform locomotion but also participate in intent and emotion expression. Thus, foot gestures are an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed from a set of gestures elicited in a participatory design session with 12 users. We implement a machine learning model in Shoes++ which can recognize two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracy (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes to support comfortable social foot interaction without taking off the shoes. Based on users' qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, thus making social interactions or interpersonal dynamics more engaging in an expanded design space. ACM Reference Format: Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2 (June 2022).
{"title":"Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction","authors":"Zihan Yan, Jiayi Zhou, Wu Yufei, Guanhong Liu, Danli Luo, Zi Zhou, Mi Haipeng, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang","doi":"10.1145/3534620","DOIUrl":"https://doi.org/10.1145/3534620","url":null,"abstract":"Feet are the foundation of our bodies that not only perform locomotion but also participate in intent and emotion expression. Thus, foot gestures are an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed by a set of gestures elicited from a participatory design session with 12 users. We implement a machine learning model in Shoes++ which can recognize two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracies (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes to support comfortable social foot interaction without taking off the shoes. Based on users’ qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, thus making social interactions or interpersonal dynamics more engaging in an expanded design space. Additional Key and smart sole Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2, (June 2022),","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol.","volume":"8 1","pages":"85:1-85:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74092177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
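The Shoes++ pipeline described above — windowed IMU signals fed to a trained gesture classifier — can be illustrated with a toy sketch. The feature set (per-axis mean and standard deviation over an accelerometer window) and the nearest-centroid classifier below are illustrative assumptions for exposition, not the paper's actual model:

```python
import math

def window_features(samples):
    """Per-axis mean and standard deviation over one window of
    (ax, ay, az) accelerometer samples — a common minimal feature
    set for IMU gesture recognition."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, math.sqrt(var)])
    return feats

def nearest_centroid(feats, centroids):
    """Classify a feature vector by its nearest gesture centroid
    (stand-in for a trained model)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feats, centroids[label]))

# Hypothetical centroids learned from labeled gesture windows.
centroids = {
    "tap":   [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "slide": [0.0, 0.0, 1.0, 0.5, 0.0, 0.0],
}
window = [(0.9, 0.0, 0.1), (1.1, 0.0, -0.1)]
print(nearest_centroid(window_features(window), centroids))  # tap
```

In a real deployment the window would stream from the sole's IMU, and the classifier would be trained on the elicited gesture vocabulary rather than hand-set centroids.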