Fig. 1. BMAR is a novel method to synchronize and align signals across devices without the need for specific user input, action, or explicit synchronization through wired or wireless communication (e.g., WiFi or BLE). BMAR is capable of synchronizing (a) independently recorded signals after the fact by (b) first pre-aligning recordings using air pressure as an inexpensive sensing modality that simultaneously allows us to reject non-simultaneous recordings. (c) In a second step, BMAR produces a refined signal alignment across sensor devices by cross-correlating accelerometer observations.
{"title":"BMAR: Barometric and Motion-based Alignment and Refinement for Offline Signal Synchronization across Devices","authors":"Manuel Meier, Christian Holz","doi":"10.1145/3596268","DOIUrl":"https://doi.org/10.1145/3596268","url":null,"abstract":"Fig. 1. BMAR is a novel method to synchronize and align signals across devices without the need for specific user input, action, or explicit synchronization through wired or wireless communication (e.g., WiFi or BLE). BMAR is capable of synchronizing (a) independently recorded signals after the fact by (b) first pre-aligning recordings using air pressure as an inexpensive sensing modality that simultaneously allows us to reject non-simultaneous recordings. (c) In a second step, BMAR produces a refined signal alignment across sensor devices by cross-correlating accelerometer observations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"54 1","pages":"69:1-69:21"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76638047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sign language builds an important bridge between d/Deaf and hard-of-hearing (DHH) people and hearing people. Regrettably, most hearing people face challenges in comprehending sign language, necessitating sign language translation. However, state-of-the-art wearable-based techniques mainly concentrate on recognizing manual markers (e.g
SmartASL: "Point-of-Care" Comprehensive ASL Interpreter Using Wearables. Yincheng Jin, Shibo Zhang, Yang Gao, Xuhai Xu, Seokmin Choi, Zhengxiong Li, H. J. Adler, Zhanpeng Jin. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2023, 60:1-60:21. https://doi.org/10.1145/3596255
With the deployment of a growing number of smart home IoT devices, privacy leakage has become a growing concern. Prior work on privacy-invasive device localization, classification, and activity identification has demonstrated a variety of privacy leakage risks in smart home environments. However, these works only demonstrate limited threats in the real world because of many impractical assumptions, such as having privileged access to the user's home network. In this paper, we identify a new end-to-end attack surface using IoTBeholder, a system that performs device localization, classification, and user activity identification. IoTBeholder can be easily run and replicated on commercial off-the-shelf (COTS) devices such as mobile phones or personal computers, enabling attackers to infer users' habitual behaviors from smart home Wi-Fi traffic alone. We set up a testbed with 23 IoT devices for evaluation in the real world. The results show that IoTBeholder achieves good device classification and device activity identification performance. In addition, IoTBeholder can infer users' habitual behaviors and automation rules with high accuracy and interpretability. It can even accurately predict users' future actions, highlighting a significant threat to user privacy that should be of great concern to IoT vendors and users.
{"title":"IoTBeholder: A Privacy Snooping Attack on User Habitual Behaviors from Smart Home Wi-Fi Traffic","authors":"Qingsong Zou, Peng Cheng, LI Qing, Liao Ruoyu, Yucheng Huang, Jingyu Xiao, Yong Jiang, Qingsong Zou, Qing Li, Ruoyu Li, Yu-Chung Huang, Gareth Tyson, Jingyu Xiao","doi":"10.1145/3580890","DOIUrl":"https://doi.org/10.1145/3580890","url":null,"abstract":"With the deployment of a growing number of smart home IoT devices, privacy leakage has become a growing concern. Prior work on privacy-invasive device localization, classification, and activity identification have proven the existence of various privacy leakage risks in smart home environments. However, they only demonstrate limited threats in real world due to many impractical assumptions, such as having privileged access to the user’s home network. In this paper, we identify a new end-to-end attack surface using IoTBeholder, a system that performs device localization, classification, and user activity identification. IoTBeholder can be easily run and replicated on commercial off-the-shelf (COTS) devices such as mobile phones or personal computers, enabling attackers to infer user’s habitual behaviors from smart home Wi-Fi traffic alone. We set up a testbed with 23 IoT devices for evaluation in the real world. The result shows that IoTBeholder has good device classification and device activity identification performance. In addition, IoTBeholder can infer the users’ habitual behaviors and automation rules with high accuracy and interpretability. It can even accurately predict the users’ future actions, highlighting a significant threat to user privacy that IoT vendors and users should highly concern.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"43 1","pages":"43:1-43:26"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79377553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as changes in camera angle and lighting. Meanwhile, there has been a proliferation of network-connected devices in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates for the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework, AMIR (Active Multimodal Interaction Recognition), that trains independent models for video and network activity recognition and subsequently combines the predictions from these models using a meta-learning framework. Whether in the lab or at home, this approach reduces the number of "paired" demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, our method requires up to 70.83% fewer samples to achieve an 85% F1 score than random data collection, and improves accuracy by 17.76% given the same number of samples.
{"title":"AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments","authors":"Shinan Liu","doi":"10.1145/3580818","DOIUrl":"https://doi.org/10.1145/3580818","url":null,"abstract":"Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as camera angle and lighting changes. There has been a proliferation of network-connected devices in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates for the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework AMIR (Active Multimodal Interaction Recognition) 1 that trains independent models for video and network activity recognition respectively, and subsequently combines the predictions from these models using a meta-learning framework. Whether in lab or at home, this approach reduces the amount of “paired” demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, the method we have developed requires up to 70.83% fewer samples to achieve 85% F1 score than random data collection, and improves accuracy by 17.76% given the same number of samples. CCS","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"4 1","pages":"21:1-21:26"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73056372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CircuitGlue: A Software Configurable Converter for Interconnecting Multiple Heterogeneous Electronic Components. M. Lambrichts, Raf Ramakers, S. Hodges, J. Devine, L. Underwood, J. Finney. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2023, 63:1-63:30. https://doi.org/10.1145/3596265
Wireless earphones are pervasive acoustic sensing platforms that can be used for many applications such as motion tracking and handwriting input. However, wireless earphones suffer from clock offset relative to the connected smart device, which causes tracking error to accumulate rapidly over time. Moreover, compared with smartphones and voice assistants, the acoustic signal transmitted by wireless earphones is much weaker due to their poor frequency response. In this paper, we propose MagSound, which uses the built-in magnets to improve the tracking and acoustic sensing performance of commercial off-the-shelf (COTS) earphones. Leveraging magnetic field strength, MagSound can predict the position of wireless earphones free from clock offset, which can be used to re-calibrate the acoustic tracking. Further, the fusion of the two modalities mitigates the accumulated clock offset and multipath effects. Besides, to increase robustness to noise, MagSound employs carefully designed Orthogonal Frequency-Division Multiplexing (OFDM) ranging signals. We implement a prototype of MagSound on COTS earphones and perform experiments on tracking and handwriting input. Results demonstrate that MagSound maintains millimeter-level error in 2D tracking and improves handwriting recognition accuracy by 49.81%. We believe that MagSound can contribute to practical applications of wireless-earphone-based sensing.
{"title":"MagSound: Magnetic Field Assisted Wireless Earphone Tracking","authors":"Lihao Wang, Wei Wang, Haipeng Dai, Shizhe Liu","doi":"10.1145/3580889","DOIUrl":"https://doi.org/10.1145/3580889","url":null,"abstract":"Wireless earphones are pervasive acoustic sensing platforms that can be used for many applications such as motion tracking and handwriting input. However, wireless earphones suffer clock offset between the connected smart devices, which would accumulate error rapidly over time. Moreover, compared with smartphone and voice assistants, the acoustic signal transmitted by wireless earphone is much weaker due to the poor frequency response. In this paper, we propose MagSound, which uses the built-in magnets to improve the tracking and acoustic sensing performance of Commercial-Off-The-Shelf (COTS) earphones. Leveraging magnetic field strength, MagSound can predict the position of wireless earphones free from clock offset, which can be used to re-calibrate the acoustic tracking. Further, the fusion of the two modalities mitigates the accumulated clock offset and multipath effect. Besides, to increase the robustness to noise, MagSound employs finely designed Orthogonal Frequency-Division Multiplexing (OFDM) ranging signals. We implement a prototype of MagSound on COTS and perform experiments for tracking and handwriting input. Results demonstrate that MagSound maintains millimeter-level error in 2D tracking, and improves the handwriting recognition accuracy by 49.81%. We believe that MagSound can contribute to practical applications of wireless earphones-based sensing.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"34 1","pages":"33:1-33:32"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81588437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present HyWay, short for “Hybrid Hallway”, to enable mingling and informal interactions among physical and virtual users in casual spaces and settings, such as office water cooler areas, conference hallways, trade show floors, and more. We call out how the hybrid and unstructured (or semi-structured) nature of such settings sets them apart from the all-virtual and/or structured settings considered in prior work. Key to the design of HyWay is bridging the awareness gap between physical and virtual users, and giving virtual users the same agency as physical users. To this end, we have designed HyWay to incorporate reciprocity (users can see and hear others only if they can be seen and heard), porosity (conversations in physical space are porous rather than confined to airtight compartments), and agency (the ability for users to seamlessly move between conversations). We present our implementation of HyWay and user survey findings from multiple deployments in unstructured settings (e.g., social gatherings) and semi-structured ones (e.g., a poster event). Results from these deployments show that HyWay enables effective mingling between physical and virtual users.
{"title":"HyWay: Enabling Mingling in the Hybrid World","authors":"Harsh Vijay, Saumay Pushp, Amish Mittal, Praveen Gupta, Meghna Gupta, Sirish Gambhira, Shivang Chopra, Mayank Baranwal, Arshia Arya, Ajay Manchepalli, V. Padmanabhan","doi":"10.1145/3596235","DOIUrl":"https://doi.org/10.1145/3596235","url":null,"abstract":"We present HyWay , short for “ Hy brid Hall way ”, to enable mingling and informal interactions among physical and virtual users, in casual spaces and settings, such as office water cooler areas, conference hallways, trade show floors, and more. We call out how the hybrid and unstructured (or semi-structured) nature of such settings set these apart from the all-virtual and/or structured settings considered in prior work. Key to the design of HyWay is bridging the awareness gap between physical and virtual users, and providing the virtual users the same agency as physical users. To this end, we have designed HyWay to incorporate reciprocity (users can see and hear others only if they can be seen and heard), porosity (conversations in physical space are porous and not within airtight compartments), and agency (the ability for users to seamlessly move between conversations). We present our implementation of HyWay and the user survey findings from multiple deployments in unstructured settings (e.g., social gatherings), and semi-structured ones (e.g., a poster event). Results from these deployments show that HyWay enables effective mingling between physical and virtual users. CCS Concepts","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"76 1","pages":"77:1-77:33"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83828199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most touch-based input devices, such as touchscreens and touchpads, capture low-resolution capacitive images when a finger touches the device’s surface. These devices only output the two-dimensional (2D) positions of contacting points, which are insufficient for complex control tasks, such as the manipulation of 3D objects. To expand the modalities of touch inputs, researchers have proposed a variety of techniques, including finger poses, chording gestures, touch pressure, etc. With the rapid development of fingerprint sensing technology, especially under-screen fingerprint sensors, it has become possible to generate input commands to control multiple degrees of freedom (DOF) at a time using fingerprint images. In this paper, we propose PrintShear, a shear input technique based on fingerprint deformation. Lateral, longitudinal and rotational deformations are extracted from fingerprint images and mapped to 3DOF control commands. Further DOF expansion can be achieved through recognition of the contact region of the touching finger. We conducted a 12-person user study to evaluate the performance of PrintShear on 3D docking tasks. Comparisons with other input methods demonstrated the superiority of our approach. Specifically, a 19.79% reduction in completion time was achieved compared with conventional touch input in a full 6DOF 3D object manipulation task.
{"title":"PrintShear: Shear Input Based on Fingerprint Deformation","authors":"Jinyang Yu, Jianjiang Feng, Jie Zhou","doi":"10.1145/3596257","DOIUrl":"https://doi.org/10.1145/3596257","url":null,"abstract":"Most touch-based input devices, such as touchscreens and touchpads, capture low-resolution capacitive images when a finger touches the device’s surface. These devices only output the two-dimensional (2D) positions of contacting points, which are insufficient for complex control tasks, such as the manipulation of 3D objects. To expand the modalities of touch inputs, researchers have proposed a variety of techniques, including finger poses, chording gestures, touch pressure, etc. With the rapid development of fingerprint sensing technology, especially under-screen fingerprint sensors, it has become possible to generate input commands to control multiple degrees of freedom (DOF) at a time using fingerprint images. In this paper, we propose PrintShear, a shear input technique based on fingerprint deformation. Lateral, longitudinal and rotational deformations are extracted from fingerprint images and mapped to 3DOF control commands. Further DOF expansion can be achieved through recognition of the contact region of the touching finger. We conducted a 12-person user study to evaluate the performance of PrintShear on 3D docking tasks. Comparisons with other input methods demonstrated the superiority of our approach. Specifically, a 19.79% reduction in completion time was achieved compared with conventional touch input in a full 6DOF 3D object manipulation task.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"20 1","pages":"81:1-81:22"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76922116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Silent speech recognition (SSR) allows users to speak to a device without making a sound, avoiding being overheard or disturbing others. Compared to video-based approaches, wireless signal-based SSR can work when the user is wearing a mask and raises fewer privacy concerns. However, previous wireless-based systems remain far from well studied; e.g., they have only been evaluated on corpora of highly limited size, making them feasible only for interaction with dozens of deterministic commands. In this paper, we present mSilent, a millimeter-wave (mmWave) based SSR system that works on a general corpus containing thousands of daily conversation sentences. With this stronger recognition capability, mSilent not only supports more complex interaction with assistants, but also enables more general applications in daily life, such as communication and text input. To extract fine-grained articulatory features, we build a signal processing pipeline that uses a clustering-selection algorithm to separate articulatory gestures and generates a multi-scale detrended spectrogram (MSDS). To handle the complexity of the general corpus, we design an end-to-end deep neural network that consists of a multi-branch convolutional front-end and a Transformer-based sequence-to-sequence back-end. We collect a general corpus dataset of 1,000 daily conversation sentences that contains 21K samples of bi-modality data (mmWave and video). Our evaluation shows that mSilent achieves a 9.5% average word error rate (WER) at a distance of 1.5 m, which is comparable to the performance of a state-of-the-art video-based approach. We also explore deploying mSilent in two typical scenarios, text entry and in-car assistant, and the less than 6% average WER demonstrates the potential of mSilent in everyday applications.
{"title":"mSilent: Towards General Corpus Silent Speech Recognition Using COTS mmWave Radar","authors":"Shangcui Zeng, Hao Wan, Shuyu Shi, Wei Wang","doi":"10.1145/3580838","DOIUrl":"https://doi.org/10.1145/3580838","url":null,"abstract":"Silent speech recognition (SSR) allows users to speak to the device without making a sound, avoiding being overheard or disturbing others. Compared to the video-based approach, wireless signal-based SSR can work when the user is wearing a mask and has fewer privacy concerns. However, previous wireless-based systems are still far from well-studied, e.g. they are only evaluated in corpus with highly limited size, making them only feasible for interaction with dozens of deterministic commands. In this paper, we present mSilent, a millimeter-wave (mmWave) based SSR system that can work in the general corpus containing thousands of daily conversation sentences. With the strong recognition capability, mSilent not only supports the more complex interaction with assistants, but also enables more general applications in daily life such as communication and input. To extract fine-grained articulatory features, we build a signal processing pipeline that uses a clustering-selection algorithm to separate articulatory gestures and generates a multi-scale detrended spectrogram (MSDS). To handle the complexity of the general corpus, we design an end-to-end deep neural network that consists of a multi-branch convolutional front-end and a Transformer-based sequence-to-sequence back-end. We collect a general corpus dataset of 1,000 daily conversation sentences that contains 21K samples of bi-modality data (mmWave and video). Our evaluation shows that mSilent achieves a 9.5% average word error rate (WER) at a distance of 1.5m, which is comparable to the performance of the state-of-the-art video-based approach. We also explore deploying mSilent in two typical scenarios of text entry and in-car assistant, and the less than 6% average WER demonstrates the potential of mSilent in general daily applications. CCS Concepts: • Human-centered computing → Ubiquitous and mobile computing systems and tools ;","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"3 1","pages":"39:1-39:28"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79988224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
and a tablet. Eggly uses novel augmented reality (AR) techniques to offer engagement and personalization, enhancing their training experience. We conducted two field studies (a single-session study and a three-week multi-session study) with a total of five autistic children to assess Eggly in practice at a special education center. Both quantitative and qualitative results indicate the effectiveness of the approach as well as contribute to the design knowledge of creating mobile AR NFT games.
{"title":"Eggly: Designing Mobile Augmented Reality Neurofeedback Training Games for Children with Autism Spectrum Disorder","authors":"Yue Lyu, Huan Zhang, Keiko Katsuragawa, J. Zhao","doi":"10.1145/3596251","DOIUrl":"https://doi.org/10.1145/3596251","url":null,"abstract":"and a tablet. Eggly uses novel augmented reality (AR) techniques to offer engagement and personalization, enhancing their training experience. We conducted two field studies (a single-session study and a three-week multi-session study) with a total of five autistic children to assess Eggly in practice at a special education center. Both quantitative and qualitative results indicate the effectiveness of the approach as well as contribute to the design knowledge of creating mobile AR NFT games.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"20 1","pages":"67:1-67:29"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72554583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}