
Latest publications from Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

EarlyScreen: Multi-scale Instance Fusion for Predicting Neural Activation and Psychopathology in Preschool Children
Pub Date : 2022-01-01 DOI: 10.1145/3534583
Manasa Kalanadhabhatta, Adrelys Mateo Santana, Zhongyang Zhang, Deepa Ganesan, Adam S. Grabell, Tauhidur Rahman
Emotion dysregulation in early childhood is known to be associated with a higher risk of several psychopathological conditions, such as ADHD and mood and anxiety disorders. In developmental neuroscience research, emotion dysregulation is characterized by low neural activation in the prefrontal cortex during frustration. In this work, we report on an exploratory study with 94 participants aged 3.5 to 5 years, investigating whether behavioral measures automatically extracted from facial videos can predict frustration-related neural activation and differentiate between low- and high-risk individuals. We propose a novel multi-scale instance fusion framework to develop EarlyScreen – a set of classifiers trained on behavioral markers during emotion regulation. Our model successfully predicts activation levels in the prefrontal cortex with an area under the receiver operating characteristic (ROC) curve of 0.85, which is on par with widely-used clinical assessment tools. Further, we classify clinical and non-clinical subjects based on their psychopathological risk with an area under the ROC curve of 0.80. Our model's predictions are consistent with standardized psychometric assessment scales, supporting its applicability as a screening procedure for emotion regulation-related psychopathological disorders. To the best of our knowledge, EarlyScreen is the first work to use automatically extracted behavioral features to characterize both neural activity and the diagnostic status of emotion regulation-related disorders in young children. We present insights from mental health professionals supporting the utility of EarlyScreen and discuss considerations for its subsequent deployment. CCS Concepts: • Human-centered computing → Ubiquitous and mobile computing systems and tools; • Computing methodologies → Machine learning; • Applied computing → Psychology; Health informatics.
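The multi-scale instance fusion idea lends itself to a compact sketch: score fixed-length windows ("instances") of per-frame behavioral features at several temporal scales, then pool the instance scores into one subject-level prediction. The sketch below is a minimal illustration with synthetic features, logistic-regression instance classifiers, and mean pooling; the scales, feature dimensionality, and pooling rule are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_session(label, T=300, D=8):
    """Synthetic stand-in for per-frame behavioral features of one child (T frames x D dims)."""
    return rng.normal(label * 0.3, 1.0, size=(T, D))

def instances(session, scale):
    """Split a session into non-overlapping windows of `scale` frames; summarize each by its mean."""
    T = len(session) // scale * scale
    return session[:T].reshape(-1, scale, session.shape[1]).mean(axis=1)

labels = rng.integers(0, 2, size=60)
sessions = [make_session(y) for y in labels]
scales = [30, 90, 300]  # ~1 s, ~3 s, and whole-session windows at 30 fps (assumed)

# One instance-level classifier per scale; instances inherit their session's label.
clfs = {}
for s in scales:
    X = np.vstack([instances(sess, s) for sess in sessions])
    y = np.concatenate([[lab] * len(instances(sess, s)) for sess, lab in zip(sessions, labels)])
    clfs[s] = LogisticRegression(max_iter=1000).fit(X, y)

def fuse(session):
    """Mean-pool instance probabilities within each scale, then across scales."""
    return np.mean([clfs[s].predict_proba(instances(session, s))[:, 1].mean() for s in scales])

print("AUC:", roc_auc_score(labels, [fuse(sess) for sess in sessions]))
```

Evaluating the pooled scores with roc_auc_score mirrors how the paper reports its 0.85 AUC; training and scoring on the same synthetic sessions here is only to keep the sketch short, and a real evaluation would hold sessions out.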
Pages: 60:1-60:39
Citations: 3
MiniKers: Interaction-Powered Smart Environment Automation
Pub Date : 2022-01-01 DOI: 10.1145/3550287
Xiaoying Yang, Jacob Sayono, Jess Xu, Jiahao Li, Josiah D. Hester, Yang Zhang
Automating operations of objects has made life easier and more convenient for billions of people, especially those with limited motor capabilities. On the other hand, even able-bodied users might not always be able to perform manual operations (e.g., when both hands are occupied), and manual operations might be undesirable for hygiene purposes (e.g., contactless devices). As a result, automation systems like motion-triggered doors, remote-control window shades, and contactless toilet lids have become increasingly popular in private and public environments. Yet, these systems are hampered by complex building wiring or short battery lifetimes, negating their positive benefits for accessibility, energy saving, healthcare, and other domains. In this paper we explore how these types of objects can be powered in perpetuity by the energy generated from a unique energy source – user interactions, specifically, the manual manipulation of objects by users who are able to perform it, when they are able to. Our assumption is that users' capabilities for object operations are heterogeneous, that both manual and automatic operations are desired in most environments, and that automatic operations are often not needed as frequently – for example, an automatic door in a public space is often manually opened many times before a need for automatic operation shows up. The energy harvested from those manual operations would be sufficient to power that one automatic operation. We instantiate this idea by upcycling common everyday objects with devices whose various mechanical designs are driven by a general-purpose backbone embedded system. We call these devices MiniKers. We built a custom driver circuit that enables motor mechanisms to toggle between generating power (i.e., manual operation) and actuating objects (i.e., automatic operation). We designed a wide variety of mechanical mechanisms to retrofit existing objects and evaluated our system with a 48-hour deployment study, which proves the efficacy of MiniKers and sheds light on this people-as-power approach as a feasible solution to the energy needs of smart environment automation.
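The people-as-power argument reduces to an energy ledger: manual operations deposit harvested energy, automatic operations withdraw it. A toy simulation of that budget (all numbers below are made-up placeholders, not measurements from the paper) might look like this:

```python
import random

random.seed(1)

E_HARVEST_PER_MANUAL = 0.5   # joules banked per manual push (placeholder, not measured)
E_COST_PER_AUTO = 3.0        # joules drawn by one motorized actuation (placeholder)
P_AUTO = 0.1                 # fraction of operations requested as automatic (placeholder)

stored, served, denied = 0.0, 0, 0
for _ in range(10_000):                 # door operations over a deployment
    if random.random() < P_AUTO:        # someone requests the automatic mode
        if stored >= E_COST_PER_AUTO:
            stored -= E_COST_PER_AUTO
            served += 1
        else:
            denied += 1                 # not enough energy banked: fall back to manual
    else:
        stored += E_HARVEST_PER_MANUAL  # a manual push charges the device

print(f"automatic requests served: {served}, denied: {denied}, residual energy: {stored:.1f} J")
```

With these placeholder numbers, six manual pushes bank enough energy for one actuation while automatic requests arrive only about once per nine manual operations, so the ledger stays positive, matching the abstract's claim that infrequent automatic operations can ride on frequent manual ones.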
Pages: 149:1-149:22
Citations: 5
AccMyrinx: Speech Synthesis with Non-Acoustic Sensor
Pub Date : 2022-01-01 DOI: 10.1145/3550338
Yunji Liang, Yuchen Qin, Qi Li, Xiaokai Yan, Zhiwen Yu, Bin Guo, S. Samtani, Yanyong Zhang
The built-in loudspeakers of mobile devices (e.g., smartphones, smartwatches, and tablets) play significant roles in human-machine interaction, such as playing music, making phone calls, and enabling voice-based interaction. Prior studies have pointed out that it is feasible to eavesdrop on the speaker via motion sensors, but whether it is possible to synthesize speech from non-acoustic signals with a sub-Nyquist sampling frequency has not been studied. In this paper, we present an end-to-end model to reconstruct the acoustic waveforms playing on the loudspeaker from the vibrations captured by the built-in accelerometer. Specifically, we present an end-to-end speech synthesis framework dubbed AccMyrinx to eavesdrop on the speaker using the built-in low-resolution accelerometer of mobile devices. AccMyrinx takes advantage of the coexistence of an accelerometer with the loudspeaker on the same motherboard and compromises the loudspeaker via the solid-borne vibrations captured by the accelerometer. Low-resolution vibration signals are fed to a wavelet-based MelGAN to generate intelligible acoustic waveforms. We conducted extensive experiments on a large-scale dataset created from audio clips downloaded from Voice of America (VOA). The experimental results show that AccMyrinx is capable of reconstructing intelligible acoustic signals playing on the loudspeaker with a smoothed word error rate (SWER) of 42.67%. The quality of the synthesized speech can be severely affected by several factors, including gender, speech rate, and volume.
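The front end of such a pipeline, turning a low-rate accelerometer stream into time-frequency frames that a neural vocoder could upsample into audio, can be sketched with SciPy. The 500 Hz accelerometer rate, the synthetic chirp standing in for solid-borne vibration, and the frame sizes are illustrative assumptions; the paper's wavelet-based MelGAN itself is not reproduced here.

```python
import numpy as np
from scipy import signal

FS_ACC = 500                       # assumed accelerometer rate (Hz): sub-Nyquist for speech
t = np.arange(0, 2.0, 1 / FS_ACC)
rng = np.random.default_rng(0)

# Synthetic stand-in for solid-borne vibration leaking from the loudspeaker.
vib = signal.chirp(t, f0=50, f1=200, t1=2.0) * 1e-2
vib += 1e-3 * rng.standard_normal(t.size)

# STFT frames: the conditioning features a generator network would upsample into audio.
f, _, Z = signal.stft(vib, fs=FS_ACC, nperseg=64, noverlap=48)
log_mag = np.log1p(np.abs(Z))      # shape: (freq_bins, time_frames)

print(log_mag.shape, f"covering 0-{f[-1]:.0f} Hz in {len(f)} bins")
```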
Pages: 127:1-127:24
Citations: 1
RFCam: Uncertainty-aware Fusion of Camera and Wi-Fi for Real-time Human Identification with Mobile Devices
Pub Date : 2022-01-01 DOI: 10.1145/3534588
Hongkai Chen, Sirajum Munir, Shane Lin
As cameras and Wi-Fi access points are widely deployed in public places, new mobile applications and services can be developed by connecting live video analytics to the mobile Wi-Fi-enabled devices of the relevant users. To achieve this, a critical challenge is to identify the person who carries a device in the video by the mobile device's network ID, e.g., its MAC address. To address this issue, we propose RFCam, a system for human identification with a fusion of Wi-Fi and camera data. RFCam uses a multi-antenna Wi-Fi radio to collect the channel state information (CSI) of Wi-Fi packets sent by mobile devices, and a camera to monitor users in the area. From low-sampling-rate CSI data, RFCam derives heterogeneous embedding features on location, motion, and user activity for each device over time, and fuses them with visual user features generated from video analytics to find the best matches. To mitigate the impact of multi-user environments on wireless sensing, we develop video-assisted learning models for the different features, quantify their uncertainties, and incorporate them with video analytics to rank moments and features for robust and efficient fusion. We also describe how to estimate the different features in device profiles with uncertainty quantification, and how an uncertainty-aware video analytics approach senses user profiles to identify contextually important moments. RFCam is implemented and tested in indoor environments for over 800 minutes with 25 volunteers, and extensive evaluation results demonstrate that RFCam achieves an average real-time identification accuracy of 97.01% in all experiments with up to ten users, significantly outperforming existing solutions.
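At its core, the fusion step scores every device-person pair on several features, down-weights uncertain features, and solves an assignment problem. A minimal sketch with synthetic similarity scores follows; the inverse-variance weighting and the Hungarian solver are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_devices = n_people = 4

# Per-feature similarity matrices (device x person) and a variance per feature
# expressing how much that feature can currently be trusted.
features = {name: rng.random((n_devices, n_people)) for name in ("location", "motion", "activity")}
variance = {"location": 0.05, "motion": 0.20, "activity": 0.50}   # placeholder uncertainties

# Inverse-variance weighting: confident features dominate the fused similarity.
weights = {k: 1.0 / v for k, v in variance.items()}
total = sum(weights.values())
fused = sum(weights[k] / total * features[k] for k in features)

# The Hungarian algorithm yields the device-to-person matching with maximal fused similarity.
rows, cols = linear_sum_assignment(fused, maximize=True)
for d, p in zip(rows, cols):
    print(f"device {d} -> person {p} (score {fused[d, p]:.2f})")
```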
Pages: 47:1-47:29
Citations: 6
MetaGanFi: Cross-Domain Unseen Individual Identification Using WiFi Signals
Pub Date : 2022-01-01 DOI: 10.1145/3550306
Jin Zhang, Zhuangzhuang Chen, Chengwen Luo, Bo Wei, S. Kanhere, Jian-qiang Li
Human gait is unique, and prior work shows increasing potential for using WiFi signals to capture the distinctive signature of an individual's gait. However, existing WiFi-based human identification (HI) systems are not ready for real-world deployment due to various strong assumptions, including identification of known users only and sufficient training data captured in predefined domains such as a fixed walking trajectory/orientation, WiFi layout (receiver locations), and multipath environment (deployment time and site). In this paper, we propose a WiFi-based HI system, MetaGanFi, which is able to accurately identify unseen individuals in uncontrolled domains with only one or a few samples. To achieve this, MetaGanFi proposes a domain unification model, CCG-GAN, which utilizes a conditional cycle generative adversarial network to filter out irrelevant perturbations incurred by interfering domains. Moreover, MetaGanFi proposes a domain-agnostic meta learning model, DA-Meta, which can quickly adapt from one or a few data samples to accurately recognize unseen individuals. A comprehensive evaluation on a real-world dataset shows that MetaGanFi can identify unseen individuals with average accuracies of 87.25% and 93.50% in the 1-shot and 5-shot cases captured under varying trajectories and multipath environments, and 86.84% and 91.25% in the 1-shot and 5-shot cases under varying WiFi layouts, while the overall inference process of domain unification and identification takes about 0.1 seconds per sample.
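The few-shot adaptation half of such a pipeline can be illustrated with a nearest-prototype classifier: embeddings of the one or five available "shots" per unseen person define class prototypes, and a query gait embedding is assigned to the closest one. This sketch assumes embeddings already produced by a domain-unified encoder (faked here with Gaussian clusters); it is not the DA-Meta model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, dim, k_shot = 5, 16, 5

# Fake "domain-unified" gait embeddings: one cluster centre per unseen person.
centres = rng.normal(size=(n_people, dim))
support = centres[:, None, :] + 0.3 * rng.normal(size=(n_people, k_shot, dim))
prototypes = support.mean(axis=1)          # one prototype per person from k_shot samples

correct, trials = 0, 200
for _ in range(trials):
    person = rng.integers(n_people)
    query = centres[person] + 0.3 * rng.normal(size=dim)
    pred = np.argmin(np.linalg.norm(prototypes - query, axis=1))
    correct += int(pred == person)

print(f"{k_shot}-shot accuracy on synthetic embeddings: {correct / trials:.2%}")
```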
Pages: 152:1-152:21
Citations: 8
SAMoSA: Sensing Activities with Motion and Subsampled Audio
Pub Date : 2022-01-01 DOI: 10.1145/3550284
Vimal Mollyn, Karan Ahuja, Dhruv Verma, Chris Harrison, Mayank Goel
Despite advances in audio- and motion-based human activity recognition systems, a practical, power-efficient, and privacy-sensitive activity recognition system has remained elusive. State-of-the-art activity recognition systems often require power-hungry and privacy-invasive audio data. This is especially challenging for resource-constrained wearables, such as smartwatches. To counter the need for an audio-based activity recognition system, we make use of compute-optimized IMUs sampled at 50 Hz to act as a trigger for detecting activity events. Once an event is detected, a multimodal deep learning model augments the motion data with audio captured on the smartwatch. We subsample this audio to 1 kHz, rendering spoken content unintelligible while reducing power consumption on mobile devices. Our multimodal deep learning model achieves a recognition accuracy of 92.2% across 26 activities.
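Two mechanisms from the abstract, the 50 Hz IMU trigger and the subsampling of audio to 1 kHz, are straightforward to sketch. The threshold, window length, and synthetic signals below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import decimate

FS_IMU, FS_AUDIO = 50, 16_000
rng = np.random.default_rng(7)

# Synthetic accelerometer magnitude: quiet background, then a burst of activity.
imu = 0.02 * rng.standard_normal(10 * FS_IMU)
imu[300:400] += 0.5 * np.abs(rng.standard_normal(100))

def triggered_windows(x, fs, win_s=1.0, thresh=0.05):
    """Flag 1 s windows whose mean absolute amplitude exceeds a threshold."""
    n = int(win_s * fs)
    windows = x[: len(x) // n * n].reshape(-1, n)
    return np.abs(windows).mean(axis=1) > thresh

print("activity detected in windows:", np.flatnonzero(triggered_windows(imu, FS_IMU)))

# On a trigger, grab the microphone buffer and decimate 16 kHz -> 1 kHz (two 4x stages).
# At 1 kHz, speech content becomes unintelligible while broadband activity signatures survive.
audio = rng.standard_normal(FS_AUDIO)          # 1 s stand-in for the captured audio
audio_1khz = decimate(decimate(audio, 4), 4)
print(audio.shape, "->", audio_1khz.shape)
```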
Pages: 132:1-132:19
Citations: 13
ILLOC: In-Hall Localization with Standard LoRaWAN Uplink Frames
Pub Date : 2022-01-01 DOI: 10.1145/3517245
Dongfang Guo, Chaojie Gu, Linshan Jiang, W. Luo, Rui Tan
LoRaWAN is a narrowband wireless technology for ubiquitous connectivity. For various applications, it is desirable to localize LoRaWAN devices based on the uplink frames that convey their application data. This localization service operates in an unobtrusive manner, in that it requires no special software instrumentation on the LoRaWAN devices. This paper investigates the feasibility of unobtrusive localization for LoRaWAN devices in hall-size indoor spaces like warehouses, airport terminals, sports centers, and museum halls. We study the TDoA-based approach, which needs to address two challenges: the poor timing performance of the LoRaWAN narrowband signal and nanosecond-level clock synchronization among anchors. We propose the ILLOC system featuring two LoRaWAN-specific techniques: (1) cross-correlation between the differential phase sequences received by two anchors to estimate TDoA, and (2) just-in-time synchronization enabled by a specially deployed LoRaWAN end device that provides a time reference upon detecting a target device's transmission. In a long tunnel corridor, a 70 × 32 m² sports hall, and a 110 × 70 m² indoor plaza with extensive non-line-of-sight propagation paths, ILLOC achieves median localization errors of 6 m (with 2 anchors), 8.36 m (with 6 anchors), and 15.16 m (with 6 anchors and frame fusion), respectively. The achieved accuracy makes ILLOC useful for applications including zone-level asset tracking, misplacement detection, airport trolley management, and cybersecurity enforcement such as detecting impersonation attacks launched by remote radios. We present the design and evaluation of this TDoA-based, unobtrusive in-hall LoRaWAN localization system for off-the-shelf LoRaWAN end devices: ILLOC deploys multiple anchors with known positions and estimates the position of an end device from the anchors' TDoA measurements on any single uplink frame. The anchors are based on software-defined radios (SDRs) to access the physical layer; we prototype ILLOC using both the Universal Software Radio Peripheral (USRP) and the LimeSDR, which is about 10x cheaper than the USRP, as anchors.
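The first technique, estimating TDoA from the cross-correlation peak between the sequences received at two anchors, can be sketched as follows. The sampling rate, noise level, and integer-lag peak picking are simplifying assumptions; meter-level accuracy in a hall additionally requires sub-sample peak interpolation and the nanosecond-level anchor synchronization described above.

```python
import numpy as np

C = 3e8                          # speed of light (m/s)
FS = 10e6                        # assumed I/Q sampling rate at the anchors (Hz)
rng = np.random.default_rng(0)

# Stand-in for the differential-phase sequence of a single uplink frame.
ref = rng.standard_normal(4096)
lag_true = 7                     # integer sample offset between the two anchors

a = ref + 0.1 * rng.standard_normal(ref.size)                      # anchor 1
b = np.roll(ref, lag_true) + 0.1 * rng.standard_normal(ref.size)   # anchor 2, delayed copy

# Cross-correlate and locate the peak: the lag is the TDoA estimate.
xcorr = np.correlate(b, a, mode="full")
lag_est = int(np.argmax(xcorr)) - (ref.size - 1)
tdoa = lag_est / FS
print(f"lag {lag_est} samples -> TDoA {tdoa * 1e9:.0f} ns -> range difference {tdoa * C:.1f} m")
```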
Pages: 13:1-13:26
Citations: 4
M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors
Pub Date : 2022-01-01 DOI: 10.1145/3534600
Sirat Samyoun, Md. Mofijul Islam, Tariq Iqbal, J. Stankovic
Modern smartwatches and wrist wearables with multiple physiological sensing modalities have emerged as a subtle way to detect different mental health conditions, such as anxiety, emotions, and stress. However, affect detection models that depend on wrist sensor data often perform poorly due to inconsistent or inaccurate signals and the scarcity of labeled data representing a condition. Although learning representations based on the physiological similarities of the affective tasks offers a possibility to solve this problem, existing approaches fail to effectively generate representations that work across these multiple tasks. Moreover, the problem becomes more challenging due to the large domain gap among these affective applications and the discrepancies among the multiple sensing modalities. We present M3Sense, a multi-task, multimodal representation learning framework that effectively learns affect-agnostic physiological representations from limited labeled data and uses a novel domain alignment technique to utilize unlabeled data from the other affective tasks to accurately detect these mental health conditions using wrist sensors only. We apply M3Sense to 3 mental health applications and quantify the achieved performance boost compared to the state of the art using extensive evaluations and ablation studies on publicly available and collected datasets. Moreover, we extensively investigate which combinations of tasks and modalities aid in developing a robust multitask learning model for affect recognition. Our analysis shows that incorporating emotion detection in the learning models degrades the performance of anxiety and stress detection, whereas stress detection helps to boost the emotion detection performance. Our results also show that M3Sense provides consistent performance across all affective tasks and available modalities and improves the performance of representation learning models on unseen affective tasks by 5% to 60%.
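A minimal sketch of the multitask-plus-alignment training loop: a shared encoder feeds per-task heads, and an alignment penalty pulls the embedding distributions of different affective tasks together. Here the alignment term is a linear-kernel MMD and the data are random tensors; the layer sizes, tasks, and loss weight are placeholders rather than the M3Sense architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder dimensions: 64-dim sensor windows, a 2-class stress task, a 4-class emotion task.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
heads = {"stress": nn.Linear(16, 2), "emotion": nn.Linear(16, 4)}
params = list(encoder.parameters()) + [p for h in heads.values() for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def mmd(a, b):
    """Linear-kernel maximum mean discrepancy: a simple stand-in for the alignment loss."""
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

for step in range(200):
    xs, ys = torch.randn(32, 64), torch.randint(0, 2, (32,))   # synthetic stress batch
    xe, ye = torch.randn(32, 64), torch.randint(0, 4, (32,))   # synthetic emotion batch
    zs, ze = encoder(xs), encoder(xe)
    loss = ce(heads["stress"](zs), ys) + ce(heads["emotion"](ze), ye) + 0.1 * mmd(zs, ze)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final combined loss: {loss.item():.3f}")
```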
Pages: 73:1-73:32
Citations: 3
SafeGait: Safeguarding Gait-based Key Generation against Vision-based Side Channel Attack Using Generative Adversarial Network
Pub Date : 2022-01-01 DOI: 10.1145/3534607
Yuezhong Wu, Mahbub Hassan, Wen Hu
Recent works have shown that wearable or implanted devices attached at different locations of the body can generate an identical security key from their independent measurements of the same gait. This has created an opportunity to realize highly secure data exchange to and from critical implanted devices. In this paper, we first demonstrate that vision can be used to easily attack such gait-based key generation: an attacker with a commodity camera can measure the gait from a distance and generate the security key of any target wearable or implanted device faster than other legitimate devices worn at different locations on the subject's body. To counter the attack, we propose a firewall that stops video-based gait measurements from proceeding to key generation while letting through measurements from the inertial measurement units (IMUs) widely used in wearable devices to measure gait accelerations from the body. We implement the firewall concept with an IMU-vs-video binary classifier that combines InceptionTime, an ensemble of deep Convolutional Neural Network (CNN) models for effective feature extraction from gait measurements, with a Generative Adversarial Network (GAN) that generalizes the classifier across subjects. A comprehensive evaluation with a real-world dataset shows that our proposed classifier achieves an accuracy of 97.82%. Given that an attacker has to fool the classifier for multiple consecutive gait cycles to generate the complete key, the high single-cycle classification accuracy results in an extremely low probability of a video attacker successfully pairing with a target wearable device. More precisely, a video attacker would have a one-in-a-billion chance of successfully generating a 128-bit key, which would require the attacker to observe the subject for thousands of years.
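The firewall reduces to a binary IMU-vs-video classifier gating the key generator: only gait cycles classified as genuine IMU measurements may contribute key bits. Below is a compact sketch on synthetic per-cycle features with a gradient-boosting stand-in for the paper's InceptionTime+GAN classifier; the feature design is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_cycles(n, source):
    """Synthetic per-gait-cycle features; video-derived cycles are assumed to carry
    less high-frequency energy than genuine IMU accelerations (an invented cue)."""
    hf_energy = rng.normal(1.0 if source == "imu" else 0.4, 0.15, size=(n, 1))
    misc = rng.normal(size=(n, 7))
    return np.hstack([hf_energy, misc])

X = np.vstack([make_cycles(500, "imu"), make_cycles(500, "video")])
y = np.array([1] * 500 + [0] * 500)               # 1 = IMU-borne, 0 = video attack
firewall_clf = GradientBoostingClassifier().fit(X, y)

def admit(cycle):
    """Let a gait cycle into key generation only if classified as IMU-borne."""
    return firewall_clf.predict(cycle.reshape(1, -1))[0] == 1

attack_cycle = make_cycles(1, "video")[0]
print("attack cycle admitted to key generation:", admit(attack_cycle))
```

Because a complete key requires many consecutive admitted cycles, per-cycle filtering compounds: if a video-derived cycle slips through with probability p, a key needing k cycles passes with probability p^k, which is how high single-cycle accuracy translates into the abstract's one-in-a-billion figure.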
Pages: 80:1-80:27
Citations: 2
EFRing: Enabling Thumb-to-Index-Finger Microgesture Interaction through Electric Field Sensing Using Single Smart Ring
Pub Date : 2022-01-01 DOI: 10.1145/3569478
Taizhou Chen, Tianpei Li, Xingyu Yang, Kening Zhu
We present EFRing, an index-finger-worn ring-form device for detecting thumb-to-index-finger (T2I) microgestures through electric-field (EF) sensing. Based on the signal changes induced by T2I motions, we proposed two machine-learning-based data-processing pipelines: one for recognizing/classifying discrete T2I microgestures, and the other for tracking continuous 1D T2I movements. Our experiments on EFRing microgesture classification showed an average within-user accuracy of 89.5% and an average cross-user accuracy of 85.2% for 9 discrete T2I microgestures. For the continuous tracking of 1D T2I movements, our method achieves a mean-square error of 3.5% for the generic model and 2.3% for the personalized model. Our 1D Fitts'-law target-selection study shows that the proposed tracking method with EFRing is intuitive and accurate for real-time usage. Lastly, we propose and discuss potential applications for EFRing.
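The Fitts'-law analysis behind such a target-selection study follows the standard form MT = a + b·ID with the Shannon formulation ID = log2(D/W + 1). A small sketch that fits a and b by least squares on made-up trial data (distances, widths, and times are placeholders, not EFRing measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

D = np.array([20, 40, 80, 160, 20, 40, 80, 160], dtype=float)  # target distance (mm, placeholder)
W = np.array([5, 5, 5, 5, 10, 10, 10, 10], dtype=float)       # target width (mm, placeholder)
ID = np.log2(D / W + 1)                                        # Shannon index of difficulty (bits)

# Made-up movement times obeying MT = a + b * ID plus noise.
a_true, b_true = 0.20, 0.15                                    # s and s/bit (placeholders)
MT = a_true + b_true * ID + 0.02 * rng.standard_normal(ID.size)

b, a = np.polyfit(ID, MT, 1)                                   # least-squares slope and intercept
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput ~ {1 / b:.1f} bits/s")
```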
Pages: 161:1-161:31
Citations: 4