
Latest Publications: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

BMAR: Barometric and Motion-based Alignment and Refinement for Offline Signal Synchronization across Devices
Pub Date : 2023-01-01 DOI: 10.1145/3596268
Manuel Meier, Christian Holz
Fig. 1. BMAR is a novel method to synchronize and align signals across devices without the need for specific user input, action, or explicit synchronization through wired or wireless communication (e.g., WiFi or BLE). BMAR is capable of synchronizing (a) independently recorded signals after the fact by (b) first pre-aligning recordings using air pressure as an inexpensive sensing modality that simultaneously allows us to reject non-simultaneous recordings. (c) In a second step, BMAR produces a refined signal alignment across sensor devices by cross-correlating accelerometer observations.
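A minimal numpy sketch of the two-stage idea in the caption: a coarse offset from cross-correlating the low-rate barometric streams, then a refinement from cross-correlating accelerometer magnitudes around that estimate. This is an illustrative reconstruction assuming uniformly sampled, overlapping recordings; the function names are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def xcorr_offset(a, b, fs, max_lag_s=None):
    """Lag in seconds by which `b` trails `a`, via normalized cross-correlation."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")
    lags = np.arange(-(len(b) - 1), len(a))      # candidate lags in samples
    if max_lag_s is not None:                    # optionally restrict the search
        keep = np.abs(lags) <= int(max_lag_s * fs)
        corr, lags = corr[keep], lags[keep]
    return lags[np.argmax(corr)] / fs

def bmar_style_align(press_a, press_b, fs_press, acc_a, acc_b, fs_acc):
    # Stage 1: coarse pre-alignment on the inexpensive pressure streams.
    # (Thresholding the peak pressure correlation would emulate BMAR's
    # rejection of non-simultaneous recordings.)
    coarse = xcorr_offset(press_a, press_b, fs_press)
    # Stage 2: undo the coarse offset on device B's accelerometer magnitude,
    # then search only a small residual window for the fine offset.
    b_shifted = np.roll(acc_b, int(round(coarse * fs_acc)))  # crude shift; real code trims edges
    fine = xcorr_offset(acc_a, b_shifted, fs_acc, max_lag_s=1.0)
    return coarse + fine                         # total offset of B w.r.t. A, in seconds
```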
Citations: 0
SmartASL: "Point-of-Care" Comprehensive ASL Interpreter Using Wearables
Pub Date : 2023-01-01 DOI: 10.1145/3596255
Yincheng Jin, Shibo Zhang, Yang Gao, Xuhai Xu, Seokmin Choi, Zhengxiong Li, H. J. Adler, Zhanpeng Jin
Sign language builds an important bridge between d/Deaf and hard-of-hearing (DHH) people and hearing people. Regrettably, most hearing people face challenges in comprehending sign language, necessitating sign language translation. However, state-of-the-art wearable-based techniques mainly concentrate on recognizing manual markers (e.g
Citations: 1
IoTBeholder: A Privacy Snooping Attack on User Habitual Behaviors from Smart Home Wi-Fi Traffic
Pub Date : 2023-01-01 DOI: 10.1145/3580890
Qingsong Zou, Peng Cheng, Qing Li, Ruoyu Li, Yucheng Huang, Jingyu Xiao, Gareth Tyson, Yong Jiang
With the deployment of a growing number of smart home IoT devices, privacy leakage has become a growing concern. Prior work on privacy-invasive device localization, classification, and activity identification has proven the existence of various privacy leakage risks in smart home environments. However, such work demonstrates only limited threats in the real world because of many impractical assumptions, such as having privileged access to the user's home network. In this paper, we identify a new end-to-end attack surface using IoTBeholder, a system that performs device localization, classification, and user activity identification. IoTBeholder can be easily run and replicated on commercial off-the-shelf (COTS) devices such as mobile phones or personal computers, enabling attackers to infer users' habitual behaviors from smart home Wi-Fi traffic alone. We set up a testbed with 23 IoT devices for evaluation in the real world. The results show that IoTBeholder achieves good device classification and device activity identification performance. In addition, IoTBeholder can infer users' habitual behaviors and automation rules with high accuracy and interpretability. It can even accurately predict users' future actions, highlighting a significant threat to user privacy that IoT vendors and users should be highly concerned about.
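To illustrate how activity inference from Wi-Fi traffic alone can work, the sketch below classifies fixed-length traffic windows using simple packet-size and inter-arrival statistics. The feature set and the random-forest classifier are assumptions chosen for illustration; the paper's actual pipeline (localization, classification, and activity identification) is more elaborate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(sizes, times):
    """Summary statistics of one traffic window: packet sizes (bytes), timestamps (s)."""
    sizes = np.asarray(sizes, dtype=float)
    iat = np.diff(times) if len(times) > 1 else np.array([0.0])  # inter-arrival times
    return [len(sizes), sizes.sum(), sizes.mean(), sizes.std(),
            iat.mean(), iat.std()]

def train_activity_classifier(windows, labels):
    """windows: list of (sizes, times) pairs; labels: activity name per window."""
    X = np.array([window_features(s, t) for s, t in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf  # clf.predict(...) then yields activity labels for new windows
```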
Citations: 2
AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments
Pub Date : 2023-01-01 DOI: 10.1145/3580818
Shinan Liu
Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as camera angle and lighting changes. Meanwhile, network-connected devices have proliferated in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework, AMIR (Active Multimodal Interaction Recognition), that trains independent models for video and network activity recognition, and subsequently combines the predictions from these models using a meta-learning framework. Whether in the lab or at home, this approach reduces the number of "paired" demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, the method requires up to 70.83% fewer samples than random data collection to achieve an 85% F1 score, and improves accuracy by 17.76% given the same number of samples.
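The "combines the predictions ... using a meta-learning framework" step is, in its simplest form, a stacking scheme: each per-modality model emits class probabilities, and a small combiner is trained on the paired examples. A minimal sketch under that assumption (the logistic-regression combiner and all names are illustrative, not the paper's exact design):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_learner(video_probs, net_probs, labels):
    """video_probs, net_probs: (n_samples, n_classes) probabilities from the
    independently trained video and network models on paired demonstrations."""
    X = np.hstack([video_probs, net_probs])   # concatenate per-modality beliefs
    meta = LogisticRegression(max_iter=1000)
    meta.fit(X, labels)
    return meta

def predict_activity(meta, video_probs, net_probs):
    return meta.predict(np.hstack([video_probs, net_probs]))
```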
Citations: 11
CircuitGlue: A Software Configurable Converter for Interconnecting Multiple Heterogeneous Electronic Components
Pub Date : 2023-01-01 DOI: 10.1145/3596265
M. Lambrichts, Raf Ramakers, S. Hodges, J. Devine, L. Underwood, J. Finney
{"title":"CircuitGIue: A Software Configurable Converter for Interconnecting Multiple Heterogeneous Electronic Components","authors":"M. Lambrichts, Raf Ramakers, S. Hodges, J. Devine, L. Underwood, J. Finney","doi":"10.1145/3596265","DOIUrl":"https://doi.org/10.1145/3596265","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80487777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MagSound: Magnetic Field Assisted Wireless Earphone Tracking
Pub Date : 2023-01-01 DOI: 10.1145/3580889
Lihao Wang, Wei Wang, Haipeng Dai, Shizhe Liu
Wireless earphones are pervasive acoustic sensing platforms that can be used for many applications, such as motion tracking and handwriting input. However, wireless earphones suffer from clock offset relative to the connected smart device, which causes error to accumulate rapidly over time. Moreover, compared with smartphones and voice assistants, the acoustic signal transmitted by wireless earphones is much weaker due to their poor frequency response. In this paper, we propose MagSound, which uses the built-in magnets to improve the tracking and acoustic sensing performance of commercial off-the-shelf (COTS) earphones. Leveraging magnetic field strength, MagSound can predict the position of wireless earphones free from clock offset, which can be used to re-calibrate the acoustic tracking. Further, the fusion of the two modalities mitigates the accumulated clock offset and multipath effects. Besides, to increase robustness to noise, MagSound employs finely designed Orthogonal Frequency-Division Multiplexing (OFDM) ranging signals. We implement a prototype of MagSound on COTS earphones and perform experiments on tracking and handwriting input. Results demonstrate that MagSound maintains millimeter-level error in 2D tracking and improves handwriting recognition accuracy by 49.81%. We believe that MagSound can contribute to practical applications of wireless-earphone-based sensing.
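The magnetic side of the design can be illustrated with a dipole falloff model: a small magnet's field magnitude decays roughly as 1/r³, so inverting a per-device-calibrated model yields a distance estimate that involves no clocks at all and can gate or re-calibrate the drifting acoustic track. A hedged sketch (the calibration constant k and the naive convex fusion are assumptions, not the paper's method):

```python
import numpy as np

def magnetic_distance(bx, by, bz, k):
    """Distance (m) from magnetometer readings (T), dipole approximation |B| ~ k / r^3."""
    mag = np.sqrt(bx**2 + by**2 + bz**2)
    return (k / mag) ** (1.0 / 3.0)

def fused_distance(d_magnetic, d_acoustic, w=0.3):
    # Naive convex combination; the paper fuses the modalities to correct the
    # acoustic track's accumulated clock offset rather than simply averaging.
    return w * d_magnetic + (1.0 - w) * d_acoustic
```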
Citations: 0
HyWay: Enabling Mingling in the Hybrid World
Pub Date : 2023-01-01 DOI: 10.1145/3596235
Harsh Vijay, Saumay Pushp, Amish Mittal, Praveen Gupta, Meghna Gupta, Sirish Gambhira, Shivang Chopra, Mayank Baranwal, Arshia Arya, Ajay Manchepalli, V. Padmanabhan
We present HyWay, short for “Hybrid Hallway”, to enable mingling and informal interactions among physical and virtual users in casual spaces and settings, such as office water cooler areas, conference hallways, trade show floors, and more. We call out how the hybrid and unstructured (or semi-structured) nature of such settings sets them apart from the all-virtual and/or structured settings considered in prior work. Key to the design of HyWay is bridging the awareness gap between physical and virtual users, and providing virtual users the same agency as physical users. To this end, we have designed HyWay to incorporate reciprocity (users can see and hear others only if they can be seen and heard), porosity (conversations in physical space are porous rather than sealed in airtight compartments), and agency (the ability for users to seamlessly move between conversations). We present our implementation of HyWay and user survey findings from multiple deployments in unstructured settings (e.g., social gatherings) and semi-structured ones (e.g., a poster event). Results from these deployments show that HyWay enables effective mingling between physical and virtual users.
Citations: 0
PrintShear: Shear Input Based on Fingerprint Deformation
Pub Date : 2023-01-01 DOI: 10.1145/3596257
Jinyang Yu, Jianjiang Feng, Jie Zhou
Most touch-based input devices, such as touchscreens and touchpads, capture low-resolution capacitive images when a finger touches the device's surface. These devices output only the two-dimensional (2D) positions of contact points, which are insufficient for complex control tasks such as manipulating 3D objects. To expand the modalities of touch input, researchers have proposed a variety of techniques, including finger poses, chording gestures, and touch pressure. With the rapid development of fingerprint sensing technology, especially under-screen fingerprint sensors, it has become possible to use fingerprint images to generate input commands that control multiple degrees of freedom (DOF) at a time. In this paper, we propose PrintShear, a shear input technique based on fingerprint deformation. Lateral, longitudinal, and rotational deformations are extracted from fingerprint images and mapped to 3-DOF control commands. Further DOF expansion can be achieved by recognizing the contact region of the touching finger. We conducted a 12-person user study to evaluate the performance of PrintShear on 3D docking tasks. Comparisons with other input methods demonstrated the superiority of our approach: specifically, a 19.79% reduction in completion time compared with conventional touch input in a full 6-DOF 3D object manipulation task.
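One plausible way to recover the three shear degrees of freedom from consecutive fingerprint frames is phase correlation: applied directly for the lateral/longitudinal shift, and on a polar remapping for the rotation. The OpenCV sketch below illustrates that idea only; it is not the paper's algorithm, and the frame sizes and sampling are assumptions.

```python
import cv2
import numpy as np

def shear_3dof(prev: np.ndarray, curr: np.ndarray):
    """prev, curr: grayscale fingerprint frames (uint8, same size).
    Returns (dx, dy, dtheta): translation in pixels, rotation in degrees."""
    p = prev.astype(np.float32)
    c = curr.astype(np.float32)
    (dx, dy), _ = cv2.phaseCorrelate(p, c)           # lateral / longitudinal shear

    h, w = p.shape
    center = (w / 2.0, h / 2.0)
    radius = min(center)
    # In the polar remap, rows correspond to angle, so a vertical shift between
    # the two polar images is a rotation between the original frames.
    polar_p = cv2.warpPolar(p, (360, 360), center, radius, cv2.WARP_POLAR_LINEAR)
    polar_c = cv2.warpPolar(c, (360, 360), center, radius, cv2.WARP_POLAR_LINEAR)
    (_, dtheta), _ = cv2.phaseCorrelate(polar_p, polar_c)  # ~1 degree per row
    return dx, dy, dtheta
```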
Citations: 0
mSilent: Towards General Corpus Silent Speech Recognition Using COTS mmWave Radar
Pub Date : 2023-01-01 DOI: 10.1145/3580838
Shangcui Zeng, Hao Wan, Shuyu Shi, Wei Wang
Silent speech recognition (SSR) allows users to speak to a device without making a sound, avoiding being overheard or disturbing others. Compared to video-based approaches, wireless signal-based SSR works even when the user is wearing a mask and raises fewer privacy concerns. However, previous wireless-based systems are still far from well studied; e.g., they have only been evaluated on corpora of highly limited size, making them feasible only for interaction with dozens of deterministic commands. In this paper, we present mSilent, a millimeter-wave (mmWave) based SSR system that works on a general corpus containing thousands of daily conversation sentences. With its strong recognition capability, mSilent not only supports more complex interaction with assistants, but also enables more general applications in daily life, such as communication and input. To extract fine-grained articulatory features, we build a signal processing pipeline that uses a clustering-selection algorithm to separate articulatory gestures and generates a multi-scale detrended spectrogram (MSDS). To handle the complexity of the general corpus, we design an end-to-end deep neural network that consists of a multi-branch convolutional front-end and a Transformer-based sequence-to-sequence back-end. We collect a general corpus dataset of 1,000 daily conversation sentences containing 21K samples of bi-modality data (mmWave and video). Our evaluation shows that mSilent achieves a 9.5% average word error rate (WER) at a distance of 1.5 m, which is comparable to the performance of the state-of-the-art video-based approach. We also explore deploying mSilent in two typical scenarios, text entry and in-car assistant, where the average WER of less than 6% demonstrates the potential of mSilent in general daily applications.
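The multi-scale detrended spectrogram (MSDS) can be approximated conceptually: compute a spectrogram, then subtract a time-smoothed trend at several window sizes so that slow drift drops out while subtle articulatory motion remains, stacking one residual layer per scale. A rough sketch with illustrative parameters (not the paper's exact MSDS definition):

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import uniform_filter1d

def multiscale_detrended_spectrogram(x, fs, scales=(5, 15, 45)):
    """x: 1-D radar-derived signal; returns an (n_scales, freq, time) tensor."""
    _, _, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
    S = np.abs(Z)                                    # magnitude spectrogram
    layers = []
    for k in scales:
        trend = uniform_filter1d(S, size=k, axis=1)  # moving average along time
        layers.append(S - trend)                     # detrended residual at scale k
    return np.stack(layers)                          # input tensor for a multi-branch CNN
```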
Citations: 1
Eggly: Designing Mobile Augmented Reality Neurofeedback Training Games for Children with Autism Spectrum Disorder
Pub Date : 2023-01-01 DOI: 10.1145/3596251
Yue Lyu, Huan Zhang, Keiko Katsuragawa, J. Zhao
…and a tablet. Eggly uses novel augmented reality (AR) techniques to offer engagement and personalization, enhancing the training experience. We conducted two field studies (a single-session study and a three-week multi-session study) with a total of five autistic children to assess Eggly in practice at a special education center. Both quantitative and qualitative results indicate the effectiveness of the approach and contribute to the design knowledge of creating mobile AR neurofeedback training (NFT) games.
Citations: 1