Arvind Pillai, Trevor Cohen, Dror Ben-Zeev, Subigya Nepal, Weichen Wang, M. Nemesure, Michael Heinz, George Price, D. Lekkas, Amanda C. Collins, Tess Z. Griffin, Benjamin Buck, S. Preum, Nicholas Jacobson
Speech-based diaries from mobile phones can capture paralinguistic patterns that help detect mental illness symptoms such as suicidal ideation. However, previous studies have primarily evaluated machine learning models on a single dataset, leaving their performance under distribution shifts unknown. In this paper, we investigate the generalizability of speech-based suicidal ideation detection on mobile phones through cross-dataset experiments on four datasets (N=786) covering individuals experiencing major depressive disorder, auditory verbal hallucinations, or persecutory thoughts, and students with suicidal thoughts. Our results show that machine and deep learning methods generalize poorly in many cases. We therefore evaluate unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA) to mitigate the performance drops caused by distribution shifts. While SSDA approaches showed superior performance, they are often ineffective because their adversarial and contrastive training requires large target datasets with limited labels. Therefore, we propose sinusoidal similarity sub-sampling (S3), a method that selects optimal source subsets for the target domain by computing pair-wise scores using sinusoids. Unlike prior approaches, S3 requires neither labeled target data nor feature transformations. Fine-tuning with S3 improves the cross-dataset performance of deep models across the datasets, with implications for ubiquitous technology, mental health, and machine learning.
{"title":"Investigating Generalizability of Speech-based Suicidal Ideation Detection Using Mobile Phones","authors":"Arvind Pillai, Trevor Cohen, Dror Ben-Zeev, Subigya Nepal, Weichen Wang, M. Nemesure, Michael Heinz, George Price, D. Lekkas, Amanda C. Collins, Tess Z Griffin, Benjamin Buck, S. Preum, Dror Nicholas Jacobson","doi":"10.1145/3631452","DOIUrl":"https://doi.org/10.1145/3631452","url":null,"abstract":"Speech-based diaries from mobile phones can capture paralinguistic patterns that help detect mental illness symptoms such as suicidal ideation. However, previous studies have primarily evaluated machine learning models on a single dataset, making their performance unknown under distribution shifts. In this paper, we investigate the generalizability of speech-based suicidal ideation detection using mobile phones through cross-dataset experiments using four datasets with N=786 individuals experiencing major depressive disorder, auditory verbal hallucinations, persecutory thoughts, and students with suicidal thoughts. Our results show that machine and deep learning methods generalize poorly in many cases. Thus, we evaluate unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA) to mitigate performance decreases owing to distribution shifts. While SSDA approaches showed superior performance, they are often ineffective, requiring large target datasets with limited labels for adversarial and contrastive training. Therefore, we propose sinusoidal similarity sub-sampling (S3), a method that selects optimal source subsets for the target domain by computing pair-wise scores using sinusoids. Compared to prior approaches, S3 does not use labeled target data or transform features. Fine-tuning using S3 improves the cross-dataset performance of deep models across the datasets, thus having implications in ubiquitous technology, mental health, and machine learning.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech enhancement is key to the quality of digital communication and is gaining increasing attention in audio processing research. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf headphones. The key idea of EarSE is a novel hardware setting: leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. Furthermore, we design a sensing methodology based on Frequency-Modulated Continuous-Wave (FMCW) signals, an ultrasonic modality sensitive enough to capture the subtle facial articulatory gestures of users while speaking. Moreover, we design a fully attention-based deep neural network that self-adaptively addresses the user-diversity problem by introducing a Vision Transformer network. We enhance the collaboration between the speech and ultrasonic modalities using a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance, increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45--66.41% in real-world applications. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also maintains practicality.
{"title":"EarSE","authors":"Di Duan, Yongliang Chen, Weitao Xu, Tianxing Li","doi":"10.1145/3631447","DOIUrl":"https://doi.org/10.1145/3631447","url":null,"abstract":"Speech enhancement is regarded as the key to the quality of digital communication and is gaining increasing attention in the research field of audio processing. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf headphones. The key idea of EarSE is a novel hardware setting---leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. Furthermore, we designed a sensing methodology based on Frequency-Modulated Continuous-Wave, which is an ultrasonic modality sensitive to capture subtle facial articulatory gestures of users when speaking. Moreover, we design a fully attention-based deep neural network to self-adaptively solve the user diversity problem by introducing the Vision Transformer network. We enhance the collaboration between the speech and ultrasonic modalities using a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance as increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45--66.41% in real-world application. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also maintains practicality.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuning Su, Yuhua Jin, Zhengqing Wang, Yonghao Shi, Da-Yuan Huang, Teng Han, Xing-Dong Yang
We investigate the feasibility of a vibrotactile device that is both battery-free and electronics-free. Our approach leverages lasers as a wireless power transfer and haptic control mechanism, which can drive small actuators commonly used in AR/VR and mobile applications with DC or AC signals. To validate the feasibility of our method, we developed a proof-of-concept prototype that includes low-cost eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs) connected to photovoltaic (PV) cells. This prototype enabled us to capture laser energy from any distance across a room and analyze the impact of critical parameters on the effectiveness of our approach. Through a user study testing 16 different vibration patterns rendered using either a single motor or two motors, we demonstrate the effectiveness of our approach in generating vibration patterns of comparable quality to a baseline that rendered the patterns using a signal generator.
{"title":"Laser-Powered Vibrotactile Rendering","authors":"Yuning Su, Yuhua Jin, Zhengqing Wang, Yonghao Shi, Da-Yuan Huang, Teng Han, Xing-Dong Yang","doi":"10.1145/3631449","DOIUrl":"https://doi.org/10.1145/3631449","url":null,"abstract":"We investigate the feasibility of a vibrotactile device that is both battery-free and electronic-free. Our approach leverages lasers as a wireless power transfer and haptic control mechanism, which can drive small actuators commonly used in AR/VR and mobile applications with DC or AC signals. To validate the feasibility of our method, we developed a proof-of-concept prototype that includes low-cost eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs) connected to photovoltaic (PV) cells. This prototype enabled us to capture laser energy from any distance across a room and analyze the impact of critical parameters on the effectiveness of our approach. Through a user study, testing 16 different vibration patterns rendered using either a single motor or two motors, we demonstrate the effectiveness of our approach in generating vibration patterns of comparable quality to a baseline, which rendered the patterns using a signal generator.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mark Colley, Oliver Speidel, Jan Strohbeck, J. Rixen, Janina Belz, Enrico Rukzio
Automated vehicles are expected to improve safety, mobility, and inclusion. User acceptance is required for the successful introduction of this technology. One essential prerequisite for acceptance is appropriately trusting the vehicle's capabilities. System transparency via visualization of internal information could calibrate this trust by enabling users to monitor the vehicle's detection and prediction capabilities, including its failures. Additionally, a concurrent increase in situation awareness could improve take-overs in case of emergency. This work reports the results of two online comparative video-based studies on visualizing prediction and maneuver-planning information. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N=280) and using state-of-the-art road-user prediction and maneuver planning applied to a pre-recorded real-world video from a real prototype (N=238). Results show that color conveys uncertainty best, that visualizing the planned trajectory increased trust, and that visualizing other predicted trajectories improved perceived safety.
{"title":"Effects of Uncertain Trajectory Prediction Visualization in Highly Automated Vehicles on Trust, Situation Awareness, and Cognitive Load","authors":"Mark Colley, Oliver Speidel, Jan Strohbeck, J. Rixen, Janina Belz, Enrico Rukzio","doi":"10.1145/3631408","DOIUrl":"https://doi.org/10.1145/3631408","url":null,"abstract":"Automated vehicles are expected to improve safety, mobility, and inclusion. User acceptance is required for the successful introduction of this technology. One essential prerequisite for acceptance is appropriately trusting the vehicle's capabilities. System transparency via visualizing internal information could calibrate this trust by enabling the surveillance of the vehicle's detection and prediction capabilities, including its failures. Additionally, concurrently increased situation awareness could improve take-overs in case of emergency. This work reports the results of two online comparative video-based studies on visualizing prediction and maneuver-planning information. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N=280) and state-of-the-art road user prediction and maneuver planning on a pre-recorded real-world video using a real prototype (N=238). Results show that color conveys uncertainty best, that the planned trajectory increased trust, and that the visualization of other predicted trajectories improved perceived safety.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Duo Zhang, Xusheng Zhang, Yaxiong Xie, Fusang Zhang, Xuanzhi Wang, Yang Li, Daqing Zhang
Millimeter wave (mmWave) radar excels at accurately estimating the distance, speed, and angle of signal reflectors relative to the radar. However, for the diverse sensing applications that rely on the radar's tracking capability, these estimates must be transformed from radar coordinates to room coordinates. This transformation hinges on the mmWave radar's location attribute, encompassing its position and orientation in room coordinates. Traditional outdoor calibration solutions for autonomous driving utilize corner reflectors as static reference points to derive the location attribute. When deployed in an indoor environment, it is challenging, even for a mmWave radar with GHz bandwidth and a large antenna array, to separate the static reference points from other multipath reflectors. To tackle static multipath, we propose to deploy a moving reference point (a moving robot) to fully harness the velocity resolution of mmWave radar. Specifically, we select a SLAM-capable robot so that its locations in room coordinates can be obtained accurately during motion, without requiring human intervention. Accurately pairing the locations of the robot under the two coordinate systems requires tight synchronization between the mmWave radar and the robot. We therefore propose a novel trajectory-correspondence-based calibration algorithm that takes the estimated trajectories of the two systems as input, decoupling their operation as much as possible. Extensive experimental results demonstrate that the proposed calibration solution exhibits very high accuracy (1.74 cm for location and 0.43° for orientation) and ensures outstanding performance in three representative applications: fall detection, point cloud fusion, and long-distance human tracking.
{"title":"LoCal","authors":"Duo Zhang, Xusheng Zhang, Yaxiong Xie, Fusang Zhang, Xuanzhi Wang, Yang Li, Daqing Zhang","doi":"10.1145/3631436","DOIUrl":"https://doi.org/10.1145/3631436","url":null,"abstract":"Millimeter wave (mmWave) radar excels in accurately estimating the distance, speed, and angle of the signal reflectors relative to the radar. However, for diverse sensing applications reliant on radar's tracking capability, these estimates must be transformed from radar to room coordinates. This transformation hinges on the mmWave radar's location attribute, encompassing its position and orientation in room coordinates. Traditional outdoor calibration solutions for autonomous driving utilize corner reflectors as static reference points to derive the location attribute. When deployed in the indoor environment, it is challenging, even for the mmWave radar with GHz bandwidth and a large antenna array, to separate the static reference points from other multipath reflectors. To tackle the static multipath, we propose to deploy a moving reference point (a moving robot) to fully harness the velocity resolution of mmWave radar. Specifically, we select a SLAM-capable robot to accurately obtain its locations under room coordinates during motion, without requiring human intervention. Accurately pairing the locations of the robot under two coordinate systems requires tight synchronization between the mmWave radar and the robot. We therefore propose a novel trajectory correspondence based calibration algorithm that takes the estimated trajectories of two systems as input, decoupling the operations of two systems to the maximum. Extensive experimental results demonstrate that the proposed calibration solution exhibits very high accuracy (1.74 cm and 0.43° accuracy for location and orientation respectively) and could ensure outstanding performance in three representative applications: fall detection, point cloud fusion, and long-distance human tracking.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao
Sweat sensing affords the monitoring of essential bio-signals for various well-being inspections. We present SweatSkin, a fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluids secreted within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, we propose four fabrication methods utilizing accessible materials. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support general to extreme sweating scenarios, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. A two-session fabrication workshop study with ten participants verified that the four fabrication methods are easy to learn and that the devices are easy to make. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.
{"title":"SweatSkin","authors":"Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao","doi":"10.1145/3631425","DOIUrl":"https://doi.org/10.1145/3631425","url":null,"abstract":"Sweat sensing affords monitoring essential bio-signals tailored for various well-being inspections. We present SweatSkin, the fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluid secretes within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, four fabrication methods utilizing accessible materials are proposed. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support general to extreme sweating scenarios, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. The two-session fabrication workshop study with ten participants verified that the four fabrication methods are easy to learn and easy to make. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Driver maneuver interaction learning (DMIL) refers to the classification task of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing studies have largely focused on the centralized collection of sensor data from drivers' smartphones (e.g., inertial measurement units, or IMUs, such as the accelerometer and gyroscope). Such a centralized mechanism might be precluded by data regulatory constraints. Furthermore, enabling an adaptive and accurate DMIL framework remains challenging due to (i) the complexity of heterogeneous driver maneuver patterns, and (ii) the impacts of anomalous driver maneuvers caused by, for instance, aggressive driving styles and behaviors. To overcome the above challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we have designed three heterogeneous representations for AF-DMIL regarding spectral, time-series, and statistical features derived from the IMU sensor readings. We have designed a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we have designed a densely connected convolutional neural network in HetRANet to enable complex feature extraction and enhance HetRANet's computational efficiency. In addition, we have designed within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL in response to anomalous maneuver data. To ease extraction of the maneuver patterns and evaluation of their mutual differences, we have designed an embedding projection network that projects the high-dimensional driver maneuver features into a low-dimensional space, and further derives the exemplars that represent the driver maneuver patterns for mutual comparison. AF-DMIL then leverages the mutual differences of the exemplars to identify those that exhibit anomalous patterns and deviate from the others, and mitigates their impacts on the federated DMIL. We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one harvested on our own) to evaluate the prototype of AF-DMIL, demonstrating its accuracy and effectiveness compared to state-of-the-art DMIL baselines (on average more than 13% improvement in DMIL accuracy), as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).
{"title":"Driver Maneuver Interaction Identification with Anomaly-Aware Federated Learning on Heterogeneous Feature Representations","authors":"Mahan Tabatabaie, Suining He","doi":"10.1145/3631421","DOIUrl":"https://doi.org/10.1145/3631421","url":null,"abstract":"Driver maneuver interaction learning (DMIL) refers to the classification task with the goal of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing conventional studies largely focused on the centralized collection of sensor data from the drivers' smartphones (say, inertial measurement units or IMUs, like accelerometer and gyroscope). Such a centralized mechanism might be precluded by data regulatory constraints. Furthermore, how to enable an adaptive and accurate DMIL framework remains challenging due to (i) complexity in heterogeneous driver maneuver patterns, and (ii) impacts of anomalous driver maneuvers due to, for instance, aggressive driving styles and behaviors. To overcome the above challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on the real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we have designed three heterogeneous representations for AF-DMIL regarding spectral, time series, and statistical features that are derived from the IMU sensor readings. We have designed a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we have designed a densely-connected convolutional neural network in HetRANet to enable the complex feature extraction and enhance the computational efficiency of HetRANet. In addition, we have designed within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL in response to anomalous maneuver data. To ease extraction of the maneuver patterns and evaluation of their mutual differences, we have designed an embedding projection network that projects the high-dimensional driver maneuver features into low-dimensional space, and further derives the exemplars that represent the driver maneuver patterns for mutual comparison. Then, AF-DMIL further leverages the mutual differences of the exemplars to identify those that exhibit anomalous patterns and deviate from others, and mitigates their impacts upon the federated DMIL. 
We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one is harvested on our own) to evaluate the prototype of AF-DMIL, demonstrating AF-DMIL's accuracy and effectiveness compared to the state-of-the-art DMIL baselines (on average by more than 13% improvement in terms of DMIL accuracy), as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
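The abstract sketches an aggregation step in which each client's maneuver features are projected to a low-dimensional exemplar, and clients whose exemplars deviate from the others are treated as anomalous and have their influence on the federated model mitigated. The following is a minimal, hypothetical sketch of such anomaly-aware weighted averaging; the deviation score, the MAD-based soft weights, and the function name are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def anomaly_aware_fedavg(client_weights, client_exemplars):
    """Aggregate client model weights while down-weighting anomalous clients.

    client_weights:   list of 1-D numpy arrays (flattened model parameters).
    client_exemplars: list of 1-D numpy arrays (low-dimensional maneuver exemplars).
    """
    E = np.stack(client_exemplars)                 # (n_clients, d)
    center = np.median(E, axis=0)                  # robust reference exemplar
    dev = np.linalg.norm(E - center, axis=1)       # per-client deviation
    mad = np.median(np.abs(dev - np.median(dev))) + 1e-8

    # Soft down-weighting: clients whose deviation is large relative to the MAD
    # contribute less to the global model.
    alpha = np.exp(-dev / (3.0 * mad))
    alpha /= alpha.sum()

    W = np.stack(client_weights)                   # (n_clients, n_params)
    return (alpha[:, None] * W).sum(axis=0)        # anomaly-aware global update
```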
Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending the same experience to remote collaboration is limited by cumbersome setups for aligning distinct physical environments and by the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication: users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g., cameras or motion-tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fi prototyping with flexible proxemics and fluid collaboration dynamics.
{"title":"SurfShare","authors":"Xincheng Huang, Robert Xiao","doi":"10.1145/3631418","DOIUrl":"https://doi.org/10.1145/3631418","url":null,"abstract":"Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending the same to remote collaboration is limited by cumbersome setups for aligning distinct physical environments and the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication. Users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g. cameras or motion tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fi prototyping with flexible proxemics and fluid collaboration dynamics.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li
WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details, including the location, material type, and shape of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that can accurately detect smooth-surfaced material types and their edges using unmodified COTS WiFi devices. Different from previous work on material identification, Wi-Painter subdivides the target into individual 2D pixels and simultaneously forms a 2D image by identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface, which can be estimated from the different reflectivities of signals with different polarization directions. In particular, we construct a multi-incident-angle model to characterize the material, using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% for different material types, including metal, wood, rubber, and plastic of different sizes and thicknesses, and across different environments. In addition, Wi-Painter can accurately detect the material types and edges of the word "LOVE" spliced together from different materials, with an average size of 60 cm × 80 cm, and material edges with different orientations.
{"title":"Wi-Painter","authors":"Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li","doi":"10.1145/3633809","DOIUrl":"https://doi.org/10.1145/3633809","url":null,"abstract":"WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details, including location, material type, and shape, of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that can accurately detects smooth-surfaced material types and their edges using COTS WiFi devices without modification. Different from previous arts for material identification, Wi-Painter subdivides the target into individual 2D pixels, and simultaneously forms a 2D image based on identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface which can be estimated by the different reflectivity of signals with different polarization directions. In particular, we construct the multi-incident angle model to characterize the material, using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% for different material types including metal, wood, rubber and plastic of different sizes and thicknesses, and across different environments. In addition, Wi-Painter can accurately detect the material type and edge of the word \"LOVE\" spliced with different materials, with an average size of 60cm × 80cm, and material edges with different orientations.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications such as indoor tracking and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the Hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which allows us to obtain a series of simple "virtual" reflection paths among receivers. Since these "virtual" reflection paths satisfy the properties of the hyperbola, we propose a hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios, reducing tracking error by 0.36 m compared with the Fresnel zone model in NLoS scenarios. When we use the proposed hyperbolic model to train a typical LSTM neural network, we further reduce the tracking error by 0.13 m and save the execution time by 281% with the same data. As a whole, our method reduces the tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.
{"title":"HyperTracking","authors":"Xiaoqiang Xu, Xuanqi Meng, Xinyu Tong, Xiulong Liu, Xin Xie, Wenyu Qu","doi":"10.1145/3631434","DOIUrl":"https://doi.org/10.1145/3631434","url":null,"abstract":"Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications, such as indoor tracking, and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the Hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which allows us to obtain a series of simple \"virtual\" reflection paths among receivers. Since these \"virtual\" reflection paths satisfy the properties of the hyperbola, we propose the hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios. We can reduce 0.36 m tracking error in contrast to the Fresnel zone model in NLoS scenarios. When we utilize the proposed hyperbolic model to train a typical LSTM neural network, we are able to further reduce the tracking error by 0.13 m and save the execution time by 281% with the same data. As a whole, our method can reduce the tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}