Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao
Sweat sensing affords monitoring of essential bio-signals tailored for various well-being inspections. We present SweatSkin, a fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluids secreted within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, we propose four fabrication methods that use accessible materials. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support scenarios ranging from everyday to extreme sweating, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. A two-session fabrication workshop with ten participants verified that the four fabrication methods are easy to learn and that the devices are easy to make. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.
{"title":"SweatSkin","authors":"Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao","doi":"10.1145/3631425","DOIUrl":"https://doi.org/10.1145/3631425","url":null,"abstract":"Sweat sensing affords monitoring essential bio-signals tailored for various well-being inspections. We present SweatSkin, the fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluid secretes within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, four fabrication methods utilizing accessible materials are proposed. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support general to extreme sweating scenarios, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. The two-session fabrication workshop study with ten participants verified that the four fabrication methods are easy to learn and easy to make. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"1 7","pages":"1 - 30"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Driver maneuver interaction learning (DMIL) refers to the classification task of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing studies have largely focused on the centralized collection of sensor data from drivers' smartphones (e.g., inertial measurement unit (IMU) data such as accelerometer and gyroscope readings). Such a centralized mechanism may be precluded by data regulatory constraints. Furthermore, enabling an adaptive and accurate DMIL framework remains challenging due to (i) the complexity of heterogeneous driver maneuver patterns, and (ii) the impact of anomalous driver maneuvers caused by, for instance, aggressive driving styles and behaviors. To overcome these challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we design three heterogeneous representations for AF-DMIL (spectral, time-series, and statistical features) derived from the IMU sensor readings. We design a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we design a densely-connected convolutional neural network in HetRANet to enable complex feature extraction and enhance computational efficiency. In addition, we design within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL that responds to anomalous maneuver data. To ease extraction of the maneuver patterns and evaluation of their mutual differences, we design an embedding projection network that projects the high-dimensional driver maneuver features into a low-dimensional space and derives exemplars that represent the driver maneuver patterns for mutual comparison. AF-DMIL then leverages the mutual differences among the exemplars to identify those that exhibit anomalous patterns and deviate from the others, and mitigates their impact on the federated DMIL. We conducted extensive driver data analytics and experimental studies on three real-world datasets (one collected by ourselves) to evaluate the AF-DMIL prototype, demonstrating its accuracy and effectiveness compared to state-of-the-art DMIL baselines (an average improvement of more than 13% in DMIL accuracy), as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).
{"title":"Driver Maneuver Interaction Identification with Anomaly-Aware Federated Learning on Heterogeneous Feature Representations","authors":"Mahan Tabatabaie, Suining He","doi":"10.1145/3631421","DOIUrl":"https://doi.org/10.1145/3631421","url":null,"abstract":"Driver maneuver interaction learning (DMIL) refers to the classification task with the goal of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing conventional studies largely focused on the centralized collection of sensor data from the drivers' smartphones (say, inertial measurement units or IMUs, like accelerometer and gyroscope). Such a centralized mechanism might be precluded by data regulatory constraints. Furthermore, how to enable an adaptive and accurate DMIL framework remains challenging due to (i) complexity in heterogeneous driver maneuver patterns, and (ii) impacts of anomalous driver maneuvers due to, for instance, aggressive driving styles and behaviors. To overcome the above challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on the real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we have designed three heterogeneous representations for AF-DMIL regarding spectral, time series, and statistical features that are derived from the IMU sensor readings. We have designed a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we have designed a densely-connected convolutional neural network in HetRANet to enable the complex feature extraction and enhance the computational efficiency of HetRANet. In addition, we have designed within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL in response to anomalous maneuver data. To ease extraction of the maneuver patterns and evaluation of their mutual differences, we have designed an embedding projection network that projects the high-dimensional driver maneuver features into low-dimensional space, and further derives the exemplars that represent the driver maneuver patterns for mutual comparison. Then, AF-DMIL further leverages the mutual differences of the exemplars to identify those that exhibit anomalous patterns and deviate from others, and mitigates their impacts upon the federated DMIL. 
We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one is harvested on our own) to evaluate the prototype of AF-DMIL, demonstrating AF-DMIL's accuracy and effectiveness compared to the state-of-the-art DMIL baselines (on average by more than 13% improvement in terms of DMIL accuracy), as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"8 4","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
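A minimal sketch of the anomaly-aware aggregation idea (down-weighting clients whose updates deviate from the cohort). The exemplar projection network and HetRANet are out of scope here, and the deviation score and weighting scheme are assumptions for illustration, not AF-DMIL's actual method:

```python
import numpy as np

def anomaly_aware_fedavg(client_weights, deviation_threshold=1.5):
    """Aggregate client models, excluding clients whose parameters
    deviate strongly from the cohort (treated as anomalous).

    client_weights: list of 1-D parameter vectors (flattened models).
    Returns the aggregated parameter vector.
    """
    W = np.stack(client_weights)                        # (n_clients, n_params)
    centroid = W.mean(axis=0)
    dists = np.linalg.norm(W - centroid, axis=1)        # per-client deviation
    z = (dists - dists.mean()) / (dists.std() + 1e-9)   # standardized deviation
    # Anomalous clients get zero weight; mildly deviating ones are softened.
    alpha = np.where(z > deviation_threshold, 0.0, 1.0 / (1.0 + np.maximum(z, 0)))
    alpha = alpha / alpha.sum()
    return (alpha[:, None] * W).sum(axis=0)

# Three ordinary clients and one aggressive outlier:
clients = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
           np.array([1.1, 0.9]), np.array([8.0, -5.0])]
print(anomaly_aware_fedavg(clients))  # close to the mean of the first three
```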
Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending such experiences to remote collaboration is limited by the cumbersome setups needed to align distinct physical environments and by the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication: users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g., cameras or motion-tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fidelity prototyping with flexible proxemics and fluid collaboration dynamics.
{"title":"SurfShare","authors":"Xincheng Huang, Robert Xiao","doi":"10.1145/3631418","DOIUrl":"https://doi.org/10.1145/3631418","url":null,"abstract":"Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending the same to remote collaboration is limited by cumbersome setups for aligning distinct physical environments and the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication. Users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g. cameras or motion tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fi prototyping with flexible proxemics and fluid collaboration dynamics.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"1 6","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li
WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details, including the location, material type, and shape of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that can accurately detect smooth-surfaced material types and their edges using unmodified COTS WiFi devices. Unlike prior art on material identification, Wi-Painter subdivides the target into individual 2D pixels and simultaneously forms a 2D image by identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface, which can be estimated from the differing reflectivity of signals with different polarization directions. In particular, we construct a multi-incident-angle model to characterize the material, using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% across material types including metal, wood, rubber, and plastic of different sizes and thicknesses, and across different environments. In addition, Wi-Painter can accurately detect the material type and edges of the word "LOVE" spliced together from different materials, with an average size of 60 cm × 80 cm, as well as material edges with different orientations.
{"title":"Wi-Painter","authors":"Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li","doi":"10.1145/3633809","DOIUrl":"https://doi.org/10.1145/3633809","url":null,"abstract":"WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details, including location, material type, and shape, of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that can accurately detects smooth-surfaced material types and their edges using COTS WiFi devices without modification. Different from previous arts for material identification, Wi-Painter subdivides the target into individual 2D pixels, and simultaneously forms a 2D image based on identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface which can be estimated by the different reflectivity of signals with different polarization directions. In particular, we construct the multi-incident angle model to characterize the material, using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% for different material types including metal, wood, rubber and plastic of different sizes and thicknesses, and across different environments. In addition, Wi-Painter can accurately detect the material type and edge of the word \"LOVE\" spliced with different materials, with an average size of 60cm × 80cm, and material edges with different orientations.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"1 4","pages":"1 - 25"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yasmine Djebrouni, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, V. Schiavoni
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing, including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns, as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral introduces a model aggregation approach that selects the most effective aggregation weights for combining FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles bias over single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of both bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
{"title":"Bias Mitigation in Federated Learning for Edge Computing","authors":"Yasmine Djebrouni, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, V. Schiavoni","doi":"10.1145/3631455","DOIUrl":"https://doi.org/10.1145/3631455","url":null,"abstract":"Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"10 3","pages":"1 - 35"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning models are a standard solution for sensor-based Human Activity Recognition (HAR), but their deployment is often limited by labeled data scarcity and the models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate these issues by infusing knowledge about context information into HAR deep learning classifiers. However, existing NeSy methods for context-aware HAR require computationally expensive symbolic reasoners during classification, making them less suitable for deployment on resource-constrained devices (e.g., mobile devices). Additionally, NeSy approaches for context-aware HAR have never been evaluated on in-the-wild datasets, and their generalization capabilities in real-world scenarios are questionable. In this work, we propose a novel approach based on a semantic loss function that infuses knowledge constraints into the HAR model during the training phase, avoiding symbolic reasoning during classification. Our results on scripted and in-the-wild datasets show how different semantic loss functions outperform a purely data-driven model. We also compare our solution with existing NeSy methods and analyze each approach's strengths and weaknesses. Our semantic loss remains the only NeSy solution that can be deployed as a single DNN without symbolic reasoning modules, reaching recognition rates close to (and in some cases better than) existing approaches.
{"title":"Semantic Loss","authors":"Luca Arrotta, Gabriele Civitarese, Claudio Bettini","doi":"10.1145/3631407","DOIUrl":"https://doi.org/10.1145/3631407","url":null,"abstract":"Deep Learning models are a standard solution for sensor-based Human Activity Recognition (HAR), but their deployment is often limited by labeled data scarcity and models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate these issues by infusing knowledge about context information into HAR deep learning classifiers. However, existing NeSy methods for context-aware HAR require computationally expensive symbolic reasoners during classification, making them less suitable for deployment on resource-constrained devices (e.g., mobile devices). Additionally, NeSy approaches for context-aware HAR have never been evaluated on in-the-wild datasets, and their generalization capabilities in real-world scenarios are questionable. In this work, we propose a novel approach based on a semantic loss function that infuses knowledge constraints in the HAR model during the training phase, avoiding symbolic reasoning during classification. Our results on scripted and in-the-wild datasets show the impact of different semantic loss functions in outperforming a purely data-driven model. We also compare our solution with existing NeSy methods and analyze each approach's strengths and weaknesses. Our semantic loss remains the only NeSy solution that can be deployed as a single DNN without the need for symbolic reasoning modules, reaching recognition rates close (and better in some cases) to existing approaches.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"2 8","pages":"1 - 29"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139438019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications such as indoor tracking and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the Hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which yields a series of simple "virtual" reflection paths among receivers. Since these "virtual" reflection paths satisfy the properties of the hyperbola, we propose the hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios, reducing tracking error by 0.36 m compared with the Fresnel zone model in NLoS scenarios. When we use the proposed hyperbolic model to train a typical LSTM neural network, we further reduce the tracking error by 0.13 m and improve execution time by 281% on the same data. Overall, our method reduces tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.
{"title":"HyperTracking","authors":"Xiaoqiang Xu, Xuanqi Meng, Xinyu Tong, Xiulong Liu, Xin Xie, Wenyu Qu","doi":"10.1145/3631434","DOIUrl":"https://doi.org/10.1145/3631434","url":null,"abstract":"Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications, such as indoor tracking, and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the Hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which allows us to obtain a series of simple \"virtual\" reflection paths among receivers. Since these \"virtual\" reflection paths satisfy the properties of the hyperbola, we propose the hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios. We can reduce 0.36 m tracking error in contrast to the Fresnel zone model in NLoS scenarios. When we utilize the proposed hyperbolic model to train a typical LSTM neural network, we are able to further reduce the tracking error by 0.13 m and save the execution time by 281% with the same data. As a whole, our method can reduce the tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 11","pages":"1 - 26"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tooth brushing monitors have the potential to enhance oral hygiene and encourage the development of healthy brushing habits. However, previous studies fall short of recognizing each individual tooth due to limitations in external sensors and variations among users. To address these challenges, we present ToothFairy, a real-time tooth-by-tooth brushing monitor that uses earphone reverse signals captured within the oral cavity to identify each tooth during brushing. The key component of ToothFairy is a novel bone-conducted acoustic attenuation model, which quantifies sound propagation within the oral cavity. This model eliminates the need for machine learning and can be calibrated by a new user with just one second of brushing data per tooth. ToothFairy also addresses practical issues such as brushing detection and tooth region determination. Results from extensive experiments, involving 10 volunteers and 25 combinations of five commercial off-the-shelf toothbrush models and five earphone models, show that ToothFairy achieves tooth recognition with an average accuracy of 90.5%.
{"title":"ToothFairy","authors":"Yang Wang, Feng Hong, Yufei Jiang, Chenyu Bao, Chao Liu, Zhongwen Guo","doi":"10.1145/3631412","DOIUrl":"https://doi.org/10.1145/3631412","url":null,"abstract":"Tooth brushing monitors have the potential to enhance oral hygiene and encourage the development of healthy brushing habits. However, previous studies fall short of recognizing each tooth due to limitations in external sensors and variations among users. To address these challenges, we present ToothFairy, a real-time tooth-by-tooth brushing monitor that uses earphone reverse signals captured within the oral cavity to identify each tooth during brushing. The key component of ToothFairy is a novel bone-conducted acoustic attenuation model, which quantifies sound propagation within the oral cavity. This model eliminates the need for machine learning and can be calibrated with just one second of brushing data for each tooth by a new user. ToothFairy also addresses practical issues such as brushing detection and tooth region determination. Results from extensive experiments, involving 10 volunteers and 25 combinations of five commercial off-the-shelf toothbrush and earphone models each, show that ToothFairy achieves tooth recognition with an average accuracy of 90.5%.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"12 50","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anran Xu, Zhongyi Zhou, Kakeru Miyazaki, Ryo Yoshikawa, S. Hosio, Koji Yatani
The world today is increasingly visual. Many of the most popular online social networking services are largely powered by images, making image privacy protection a critical research topic in the fields of ubiquitous computing, usable security, and human-computer interaction (HCI). One topical issue is understanding privacy-threatening content in images that are shared online. This dataset article introduces DIPA2, an open-source image dataset that offers object-level annotations with high-level reasoning properties to show perceptions of privacy across different cultures. DIPA2 provides 5,897 annotations describing the perceived privacy risks of 3,347 objects in 1,304 images. The annotations contain the type of the object and four additional privacy metrics: 1) the information type indicating what kind of information may leak if the image containing the object is shared, 2) a 7-point Likert item estimating the perceived severity of the privacy leakage, and 3-4) the intended recipient scopes when annotators assume they are the image owner and when they allow others to repost the image. Our dataset contains unique data from two cultures: we recruited annotators from both Japan and the U.K. to demonstrate the impact of culture on object-level privacy perceptions. In this paper, we first illustrate how we designed and performed the construction of DIPA2, along with data analysis of the collected annotations. Second, we provide two machine-learning baselines to demonstrate how DIPA2 challenges the current image privacy recognition task. DIPA2 facilitates various types of research on image privacy, including machine learning methods inferring privacy threats in complex scenarios, quantitative analysis of cultural influences on privacy preferences, understanding of image sharing behaviors, and promotion of cyber hygiene for general user populations.
{"title":"DIPA2","authors":"Anran Xu, Zhongyi Zhou, Kakeru Miyazaki, Ryo Yoshikawa, S. Hosio, Koji Yatani","doi":"10.1145/3631439","DOIUrl":"https://doi.org/10.1145/3631439","url":null,"abstract":"The world today is increasingly visual. Many of the most popular online social networking services are largely powered by images, making image privacy protection a critical research topic in the fields of ubiquitous computing, usable security, and human-computer interaction (HCI). One topical issue is understanding privacy-threatening content in images that are shared online. This dataset article introduces DIPA2, an open-sourced image dataset that offers object-level annotations with high-level reasoning properties to show perceptions of privacy among different cultures. DIPA2 provides 5,897 annotations describing perceived privacy risks of 3,347 objects in 1,304 images. The annotations contain the type of the object and four additional privacy metrics: 1) information type indicating what kind of information may leak if the image containing the object is shared, 2) a 7-point Likert item estimating the perceived severity of privacy leakages, and 3) intended recipient scopes when annotators assume they are either image owners or allowing others to repost the image. Our dataset contains unique data from two cultures: We recruited annotators from both Japan and the U.K. to demonstrate the impact of culture on object-level privacy perceptions. In this paper, we first illustrate how we designed and performed the construction of DIPA2, along with data analysis of the collected annotations. Second, we provide two machine-learning baselines to demonstrate how DIPA2 challenges the current image privacy recognition task. DIPA2 facilitates various types of research on image privacy, including machine learning methods inferring privacy threats in complex scenarios, quantitative analysis of cultural influences on privacy preferences, understanding of image sharing behaviors, and promotion of cyber hygiene for general user populations.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"11 3","pages":"1 - 30"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. U. Demirel, Ting Dang, Khaldoon Al-Naimi, F. Kawsar, A. Montanari
Earables (in-ear wearables) are gaining increasing attention for sensing applications and healthcare research thanks to their ergonomics and non-invasive nature. However, air leakage between the device and the user's ear, resulting from daily activities or variability in how the device is worn, can degrade application performance, interfere with calibration, and reduce the robustness of the overall system. The existing literature lacks established methods for estimating the degree of air leakage (i.e., seal integrity) to inform earable applications. In this work, we propose a novel unobtrusive method for estimating the air leakage level of earbuds based on an in-ear microphone. The proposed method estimates the magnitude of distortions, reflections, and external noise in the ear canal while excluding the speaker output, by learning the speaker-to-microphone transfer function, which allows the task to be performed unobtrusively. From the resulting residual signal in the ear canal, we extract three features and deploy a machine-learning model to estimate the air leakage level. We investigated our system under various conditions to validate its robustness and resilience against motion and other artefacts. Our extensive experimental evaluation shows that the proposed method can track air leakage levels across different daily activities. "The best computer is a quiet, invisible servant." ~Mark Weiser
{"title":"Unobtrusive Air Leakage Estimation for Earables with In-ear Microphones","authors":"B. U. Demirel, Ting Dang, Khaldoon Al-Naimi, F. Kawsar, A. Montanari","doi":"10.1145/3631405","DOIUrl":"https://doi.org/10.1145/3631405","url":null,"abstract":"Earables (in-ear wearables) are gaining increasing attention for sensing applications and healthcare research thanks to their ergonomy and non-invasive nature. However, air leakages between the device and the user's ear, resulting from daily activities or wearing variabilities, can decrease the performance of applications, interfere with calibrations, and reduce the robustness of the overall system. Existing literature lacks established methods for estimating the degree of air leaks (i.e., seal integrity) to provide information for the earable applications. In this work, we proposed a novel unobtrusive method for estimating the air leakage level of earbuds based on an in-ear microphone. The proposed method aims to estimate the magnitude of distortions, reflections, and external noise in the ear canal while excluding the speaker output by learning the speaker-to-microphone transfer function which allows us to perform the task unobtrusively. Using the obtained residual signal in the ear canal, we extract three features and deploy a machine-learning model for estimating the air leakage level. We investigated our system under various conditions to validate its robustness and resilience against the motion and other artefacts. Our extensive experimental evaluation shows that the proposed method can track air leakage levels under different daily activities. \"The best computer is a quiet, invisible servant.\" ~Mark Weiser","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 6","pages":"1 - 29"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}