The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust, high-accuracy tracking across multiple agents in practical, everyday environments - a feature central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering to the multiple dimensions of accuracy, robustness (diverse environmental conditions) and scalability (multiple agents) simultaneously. In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities - passive/relative (e.g., visual odometry) and active/absolute (e.g., infrastructure-assisted RF localization) - offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information, using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional benefits in terms of tracking accuracy, scalability, and robustness in enabling practical multi-agent immersive applications in everyday environments.
{"title":"RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensing","authors":"Mallesham Dasari","doi":"10.1145/3580854","DOIUrl":"https://doi.org/10.1145/3580854","url":null,"abstract":"The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust and high tracking accuracies across multiple agents in practical, everyday environments - a feature central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering to the multiple dimensions of accuracy, robustness (diverse environmental conditions) and scalability (multiple agents) simultaneously. In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, - passive/relative (e.g. visual odometry) and active/absolute tracking (e.g.infrastructure-assisted RF localization) offer a key first layer of diversity that brings scalability while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. ROVAR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal ROVAR'S multi-dimensional benefits in terms of tracking accuracy, scalability and robustness to enable practical multi-agent immersive applications in everyday environments.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79937788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
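The first layer of diversity described in the RoVaR abstract (a relative modality such as visual odometry fused with absolute RF position fixes) can be illustrated with a minimal complementary-filter sketch. The function name, the fixed blending weight, and the 2D state below are hypothetical simplifications for illustration, not RoVaR's actual algorithm:

```python
def fuse_track(vo_deltas, rf_fixes, alpha=0.8):
    """Complementary filter: dead-reckon with relative VO deltas, then pull
    the estimate toward each absolute RF fix to cancel accumulated drift.
    alpha weights the dead-reckoned estimate; (1 - alpha) weights the RF fix.
    """
    x, y = rf_fixes[0]            # initialize from the first absolute fix
    track = [(x, y)]
    for (dx, dy), (rx, ry) in zip(vo_deltas, rf_fixes[1:]):
        # relative modality: integrate the odometry step
        x, y = x + dx, y + dy
        # absolute modality: blend in the infrastructure-assisted fix
        x = alpha * x + (1 - alpha) * rx
        y = alpha * y + (1 - alpha) * ry
        track.append((x, y))
    return track
```

Because the RF fix is absolute, any drift in the integrated odometry is bounded rather than growing without limit; per-agent RF identities are also what make the scheme naturally multi-agent.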
There are many scenarios where the liquid is occluded by other items (e.g.
{"title":"PackquID: In-packet Liquid Identification Using RF Signals","authors":"Fei Shang, Panlong Yang, Yubo Yan, Xiangyang Li","doi":"10.1145/3569469","DOIUrl":"https://doi.org/10.1145/3569469","url":null,"abstract":"There are many scenarios where the liquid is occluded by other items ( e.g","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73214010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio
Educational apps support learning, but handwriting training is still based on analog pen and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills due to the different haptic feedback of the tablet, stylus, or finger compared to pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on standard paper placed on a tablet's screen, augmenting the paper from below with animated letters and individual feedback. We conducted two online surveys with a total of 29 parents and teachers of elementary school pupils, and a user study with 13 children and 13 parents for evaluation. Our results show the importance of genuine analog haptic feedback combined with the augmentation of SpARklingPaper: it was rated superior to our stylus baseline condition regarding pen handling, writing training success, motivation, and overall impression. SpARklingPaper can be a blueprint for high-fidelity haptic-feedback handwriting training systems.
{"title":"SpARklingPaper: Enhancing Common Pen- And Paper-Based Handwriting Training for Children by Digitally Augmenting Papers Using a Tablet Screen","authors":"T. Drey, Jessica Janek, Josef Lang, Dietmar Puschmann, Michael Rietzler, E. Rukzio","doi":"10.1145/3550337","DOIUrl":"https://doi.org/10.1145/3550337","url":null,"abstract":"Educational apps support learning, but handwriting training is still based on analog pen- and paper. However, training handwriting with apps can negatively affect graphomotor handwriting skills due to the different haptic feedback of the tablet, stylus, or finger compared to pen and paper. With SpARklingPaper, we are the first to combine the genuine haptic feedback of analog pen and paper with the digital support of apps. Our artifact contribution enables children to write with any pen on a standard paper placed on a tablet’s screen, augmenting the paper from below, showing animated letters and individual feedback. We conducted two online surveys with overall 29 parents and teachers of elementary school pupils and a user study with 13 children and 13 parents for evaluation. Our results show the importance of the genuine analog haptic feedback combined with the augmentation of SpARklingPaper. It was rated superior compared to our stylus baseline condition regarding pen-handling, writing training-success, motivation, and overall impression. SpARklingPaper can be a blueprint for high-fidelity haptic feedback handwriting training systems.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73864905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. Applying Explainable Artificial Intelligence (XAI) to ADL recognition has the potential to make this process trusted, transparent and understandable. The few works that investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology that transforms sensor data into semantic images to take advantage of XAI methods based on Convolutional Neural Networks (CNNs). We apply different XAI approaches for deep learning and, from the resulting heat maps, generate explanations in natural language. To identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.
{"title":"DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments","authors":"Luca Arrotta, Gabriele Civitarese, C. Bettini","doi":"10.1145/3517224","DOIUrl":"https://doi.org/10.1145/3517224","url":null,"abstract":"The sensor-based recognition of Activities of Daily Living (ADLs) in smart-home environments is an active research area, with relevant applications in healthcare and ambient assisted living. The application of Explainable Artificial Intelligence (XAI) to ADLs recognition has the potential of making this process trusted, transparent and understandable. The few works that investigated this problem considered only interpretable machine learning models. In this work, we propose DeXAR, a novel methodology to transform sensor data into semantic images to take advantage of XAI methods based on Convolutional Neural Networks (CNN). We apply different XAI approaches for deep learning and, from the resulting heat maps, we generate explanations in natural language. In order to identify the most effective XAI method, we performed extensive experiments on two different datasets, with both a common-knowledge and a user-based evaluation. The results of a user study show that the white-box XAI method based on prototypes is the most effective.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81979148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
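DeXAR's first step, turning discrete smart-home sensor events into a "semantic image" that a CNN (and CNN-based XAI methods) can consume, can be sketched as a sensor-by-time-window count matrix. The encoding below (rows = sensors, columns = time windows, cells = activation counts) and all names in it are a hypothetical simplification, not the paper's exact construction:

```python
def sensors_to_image(events, sensor_ids, num_windows, window_s=60):
    """Encode (timestamp_s, sensor_id) events as a 2D matrix: one row per
    sensor, one column per time window, each cell an activation count.
    The result can be fed to a CNN like a single-channel image."""
    row = {s: i for i, s in enumerate(sensor_ids)}
    img = [[0] * num_windows for _ in sensor_ids]
    for ts, sid in events:
        col = int(ts // window_s)
        if sid in row and 0 <= col < num_windows:
            img[row[sid]][col] += 1
    return img
```

Heat maps produced by a CNN explainer over such an image point back to specific sensors and time windows, which is what makes translating them into natural-language explanations feasible.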
Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from short sensing range (e.g., ≤ 0.5 m for a thermometer), are susceptible to interference (e.g., smoke detectors), or incur high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use and timely room-scale fire detection system based on acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which senses fire remotely by emitting inaudible sound waves. Unlike existing works that use the signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense fire, owing to the unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the transmission signal length with sensing distance. The transmission frame is carefully selected to expand the sensing range and balance a series of practical factors that impact the system's performance. We further design a simple yet effective approach to remove environmental interference caused by signal reflection, based on a deep investigation of the channel differences between sound reflection and sound absorption. Specifically, sound reflection results in a much more stable signal-energy pattern than sound absorption, which can be exploited to differentiate channel measurements caused by fire from other interferences. Extensive experiments demonstrate that HearFire enables a sensing range of up to 7 m and achieves timely fire detection in indoor environments with up to 99.2% accuracy under different experiment configurations.
{"title":"HearFire: Indoor Fire Detection via Inaudible Acoustic Sensing","authors":"Z. Wang","doi":"10.1145/3569500","DOIUrl":"https://doi.org/10.1145/3569500","url":null,"abstract":"Indoor conflagration causes a large number of casualties and property losses worldwide every year. Yet existing indoor fire detection systems either suffer from short sensing range (e.g., ≤ 0.5m using a thermometer), susceptible to interferences (e.g., smoke detector) or high computational and deployment overhead (e.g., cameras, Wi-Fi). This paper proposes HearFire, a cost-effective, easy-to-use and timely room-scale fire detection system via acoustic sensing. HearFire consists of a collocated commodity speaker and microphone pair, which remotely senses fire by emitting inaudible sound waves. Unlike existing works that use signal reflection effect to fulfill acoustic sensing tasks, HearFire leverages sound absorption and sound speed variations to sense the fire due to unique physical properties of flame. Through a deep analysis of sound transmission, HearFire effectively achieves room-scale sensing by correlating the relationship between the transmission signal length and sensing distance. The transmission frame is carefully selected to expand sensing range and balance a series of practical factors that impact the system’s performance. We further design a simple yet effective approach to remove the environmental interference caused by signal reflection by conducting a deep investigation into channel differences between sound reflection and sound absorption. Specifically, sound reflection results in a much more stable pattern in terms of signal energy than sound absorption, which can be exploited to differentiate the channel measurements caused by fire from other interferences. Extensive experiments demonstrate that HireFire enables a maximum 7m sensing range and achieves timely fire detection in indoor environments with up to 99 . 
2% accuracy under different experiment configurations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78973624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
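The sound-speed variation HearFire exploits follows from standard acoustics: the speed of sound in air rises with temperature, approximately c(T) ≈ 331.3·√(1 + T/273.15) m/s for T in °C. The sketch below shows how a heated path (e.g., hot air above a flame) shortens round-trip time of flight; the temperatures and the detection setup are illustrative assumptions, not the paper's parameters:

```python
import math

def sound_speed(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C).
    Rising temperature -> faster sound: the effect a flame induces."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

def tof_shift(distance_m, baseline_c=20.0, heated_c=200.0):
    """Round-trip time-of-flight decrease (s) when the acoustic path heats
    from baseline_c to heated_c; a positive shift hints at a fire."""
    d = 2 * distance_m  # speaker -> target -> microphone
    return d / sound_speed(baseline_c) - d / sound_speed(heated_c)
```

Even over a 7 m path the shift is sub-millisecond, which is why a carefully chosen transmission frame and interference rejection matter for reliable detection.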
Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily life. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee to provide BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone such that it is an extension of the existing headphone. The top-level software includes a series of seamless image processing algorithms to solve the challenges brought by the unconstrained wearable form so as to ensure excellent real-time performance. Moreover, users are provided with a personalized guidance scheme so that they can use LiSee quickly based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and participants’ homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants to reach surrounding objects.
{"title":"LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects","authors":"Kaixin Chen, Yongzhi Huang, Yicong Chen, Haobin Zhong, Lihua Lin, Lu Wang, Kaishun Wu","doi":"10.1145/3550282","DOIUrl":"https://doi.org/10.1145/3550282","url":null,"abstract":"Reaching surrounding target objects is difficult for blind and low-vision (BLV) users, affecting their daily life. Based on interviews and exchanges, we propose an unobtrusive wearable system called LiSee to provide BLV users with all-day assistance. Following a user-centered design method, we carefully designed the LiSee prototype, which integrates various electronic components and is disguised as a neckband headphone such that it is an extension of the existing headphone. The top-level software includes a series of seamless image processing algorithms to solve the challenges brought by the unconstrained wearable form so as to ensure excellent real-time performance. Moreover, users are provided with a personalized guidance scheme so that they can use LiSee quickly based on their personal expertise. Finally, a system evaluation and a user study were completed in the laboratory and participants’ homes. The results show that LiSee works robustly, indicating that it can meet the daily needs of most participants to reach surrounding objects.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82675803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen
One key strategy for combating obesity is to slow down eating; however, this is difficult to achieve due to people's habitual nature. In this paper, we explored the feasibility of incorporating a shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms that could affect bite size while maintaining usability. Those findings allowed the development of the SSpoon prototype through a series of design explorations optimised for user adoption. Then, we applied two shape-changing strategies (instant transformations based on food forms and subtle transformations based on food intake) and examined them in two comparative studies involving a full-course meal using a Wizard-of-Oz approach. The results indicated that SSpoon could achieve effects comparable to a small spoon (5 ml) in reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining user satisfaction similar to a normal eating spoon (10 ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative for combating the growing concern of obesity.
{"title":"SSpoon: A Shape-changing Spoon That Optimizes Bite Size for Eating Rate Regulation","authors":"Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen","doi":"10.1145/3550312","DOIUrl":"https://doi.org/10.1145/3550312","url":null,"abstract":"One key strategy of combating obesity is to slow down eating; however, this is difficult to achieve due to people’s habitual nature. In this paper, we explored the feasibility of incorporating shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms that could affect bite size while maintaining usability. Those findings allowed the development of SSpoon prototype through a series of design explorations that are optimised for user’s adoption. Then, we applied two shape-changing strategies: instant transformations based on food forms and subtle transformations based on food intake) and examined in two comparative studies involving a full course meal using Wizard-of-Oz approach. The results indicated that SSpoon could achieve comparable effects to a small spoon (5ml) in reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining similar user satisfaction as a normal eating spoon (10ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative to combat the growing concern of obesity. . These provide initial to RQ4 , suggesting that SSpoon may not influence the perceived despite the overall of in a standardized","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79599477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani
This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users' interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The straps allow the effective combination of these inputs as a mode of interaction with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of using StretchAR as an input modality when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses, with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we provide guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
{"title":"StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps","authors":"Luis Paredes, Ananya Ipsita, J. C. Mesa, Ramses V. Martinez Garrido, K. Ramani","doi":"10.1145/3550305","DOIUrl":"https://doi.org/10.1145/3550305","url":null,"abstract":"presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users’ interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The strap allows the effective combination of these inputs as a mode of interaction with the content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of the use of StretchAR as input modalities when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with a 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch–stretch capabilities of StretchAR. Finally, we facilitate recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses. Exploiting as","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76663092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sign language translation (SLT) is considered a core technology for breaking the communication barrier between deaf and hearing people. However, most studies focus only on recognizing the sequence of sign gestures (sign language recognition, SLR), ignoring the significant differences in linguistic structure between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from sign-induced sensory signals into spoken text. WearSign leverages a smartwatch and an armband of electromyography (EMG) sensors to capture sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework that uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior systems usually degrades drastically on sentences with complex structures or sentences unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the far more abundant spoken language data to synthesize paired sign language data. We include the synthetic pairs in the training process, which enables the network to learn a better sequence-to-sequence mapping and generate more fluent spoken language sentences. We construct an American Sign Language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen-sentence tests, respectively. We also implement a real-time version of WearSign that runs fully on a smartphone with low latency and energy overhead.
{"title":"WearSign: Pushing the Limit of Sign Language Translation Using Inertial and EMG Wearables","authors":"Qian Zhang, JiaZhen Jing, Dong Wang, Run Zhao","doi":"10.1145/3517257","DOIUrl":"https://doi.org/10.1145/3517257","url":null,"abstract":"Sign language translation (SLT) is considered as the core technology to break the communication barrier between the deaf and hearing people. However, most studies only focus on recognizing the sequence of sign gestures (sign language recognition (SLR)), ignoring the significant difference of linguistic structures between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from the sign-induced sensory signals into spoken texts. WearSign leverages a smartwatch and an armband of ElectroMyoGraphy (EMG) sensors to capture the sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework which uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior studies usually degrades drastically when it comes to sentences with complex structures or unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the much more available spoken language data to synthesize the paired sign language data. We include the synthetic pairs into the training process, which enables the network to learn better sequence-to-sequence mapping as well as generate more fluent spoken language sentences. We construct an American sign language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. 
WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen sentence tests respectively. We also implement a real-time version of WearSign which runs fully on the smartphone with a low latency and energy overhead. CCS Concepts:","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78004612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
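The word error rate (WER) WearSign reports is the standard metric for translation and speech systems: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, normalized by reference length. A conventional dynamic-programming implementation, included here as background rather than as the paper's code:

```python
def wer(ref, hyp):
    """Word error rate: Levenshtein distance over words / len(reference)."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i                      # delete all remaining ref words
    for j in range(len(h) + 1):
        dp[0][j] = j                      # insert all hyp words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)
```

A WER of 4.7% thus means roughly one word-level edit per 21 reference words, which is why WER can exceed 100% when the hypothesis inserts many spurious words.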