Rongrong Wang, Rui Tan, Zhenyu Yan, Chris Xiaoxuan Lu
Identifying new sensing modalities for indoor localization is an active research interest. This paper studies the powerline-induced alternating magnetic field (AMF) that fills indoor space for orientation-aware three-dimensional (3D) simultaneous localization and mapping (SLAM). While an existing study has adopted a uniaxial AMF sensor for SLAM on a planar surface, that design falls short of addressing the vector-field nature of AMF and is therefore susceptible to sensor orientation variations. Moreover, although the higher spatial variability of AMF compared with indoor geomagnetism improves location sensing resolution, extra SLAM algorithm designs are needed to achieve robustness to trajectory deviations from the constructed map. To address these issues, we design a new triaxial AMF sensor and a new SLAM algorithm that constructs a 3D AMF intensity map regularized and augmented by a Gaussian process. The triaxial sensor's orientation estimation is free of the error accumulation problem faced by inertial sensing. From extensive evaluation in eight indoor environments, our AMF-based 3D SLAM achieves sub-1 m to 3 m median localization errors in spaces of up to 500 m², sub-2° mean error in orientation sensing, and outperforms SLAM systems based on Wi-Fi, geomagnetism, and uniaxial AMF by more than 30%.
Orientation-Aware 3D SLAM in Alternating Magnetic Field from Powerlines. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–25. https://doi.org/10.1145/3631446. Published January 12, 2024.
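The 3D AMF intensity map above is regularized and augmented by a Gaussian process, which lets the system interpolate field intensity at locations off the surveyed trajectory. A minimal sketch of GP regression over 3D locations (RBF kernel, numpy only; the kernel choice, length scale, and noise level are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 3D locations (metres)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_query, noise=1e-3):
    """GP posterior mean of AMF intensity at unvisited query locations,
    given surveyed (location, intensity) pairs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    return K_s @ np.linalg.solve(K, y_train)
```

Querying near surveyed points returns smoothly interpolated intensities, which is what makes the map robust to trajectory deviations.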
Youjin Sung, Rachel Kim, Kun Woo Song, Yitian Shao, Sang Ho Yoon
The emergence of vibrotactile feedback in hand wearables enables an immersive virtual reality (VR) experience with whole-hand haptic rendering. However, existing haptic rendering neglects the inconsistent sensations caused by hand postures. In our study, we observed that changing hand postures alters the distribution of vibrotactile signals, which might degrade haptic perception. To address this issue, we present HapticPilot, which enables in-situ haptic experience design for hand wearables in VR. We developed an in-situ authoring system supporting instant haptic design. In the authoring tool, we applied our posture-adaptive haptic rendering algorithm with a novel haptic design abstraction called the phantom grid. The algorithm adapts the phantom grid to the target posture and incorporates 1D and 2D phantom sensation with a unique actuator arrangement to provide a whole-hand experience. With this method, HapticPilot provides a consistent haptic experience across various hand postures. By measuring perceptual haptic performance and collecting qualitative feedback, we validated the usability of the system. Finally, we demonstrated our system with prospective VR scenarios showing how it enables an intuitive, empowering, and responsive haptic authoring framework.
HapticPilot. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–28. https://doi.org/10.1145/3631453. Published January 12, 2024.
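The phantom grid builds on phantom sensation, where two physical actuators driven together evoke a virtual vibration between them. A sketch of the classic 1D energy-summation model (the function name and the energy model are assumptions drawn from the general haptics literature; the paper's posture-adaptive formulation may differ):

```python
import math

def phantom_pair(amplitude, beta):
    """Energy-model amplitudes for two physical actuators rendering a
    virtual (phantom) vibration at normalized position beta in [0, 1]
    between them. Total vibration energy is preserved:
    a_left**2 + a_right**2 == amplitude**2."""
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must lie between the two actuators")
    a_left = amplitude * math.sqrt(1.0 - beta)
    a_right = amplitude * math.sqrt(beta)
    return a_left, a_right
```

Sweeping beta from 0 to 1 slides the perceived vibration from one actuator to the other, which is the primitive a posture-adaptive grid would remap per hand pose.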
A text editing solution that adapts to speech-unfriendly environments (where speaking is inconvenient or speech recognition is unreliable) is essential for head-mounted displays (HMDs) to work universally. Existing schemes, e.g., the touch bar, virtual keyboard, and physical keyboard, have shortcomings such as insufficient speed, an uncomfortable experience, or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping and editing commands (i.e., Copy, Paste, Delete, etc.). Through a literature overview and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor i) adapts well to various contents and scenes with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; iii) performs comparably to a mobile phone in text selection tasks. The comparison results with the speech-dependent EYEditor and the built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.
TouchEditor. Lishuang Zhan, Tianyang Xiong, Hongwei Zhang, Shihui Guo, Xiaowei Chen, Jiangtao Gong, Juncong Lin, Yipeng Qin. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–29. https://doi.org/10.1145/3631454. Published January 12, 2024.
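The area-and-pressure-based method assigns gestures in different sensor zones and at different strengths to cursor moves of different directions and granularities. A hypothetical sketch of such a mapping (the zone names, pressure threshold, and 5-character word jump are all invented for illustration, not the paper's calibrated design):

```python
def cursor_step(zone, pressure, threshold=0.5):
    """Map a swipe on the film sensor to a cursor move.
    zone: 'left' or 'right' half of the sensor (direction);
    pressure: normalized [0, 1] reading; pressing harder than `threshold`
    selects coarse word-level jumps, lighter touches move by character.
    Zone names and thresholds are illustrative assumptions."""
    direction = -1 if zone == "left" else 1
    granularity = "word" if pressure >= threshold else "char"
    step = 5 if granularity == "word" else 1  # hypothetical word-jump size
    return direction * step, granularity
```

The design point is that one continuous sensor dimension (pressure) selects granularity, so the user never switches modes explicitly.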
Body temperature is an important vital sign which can indicate fever and is known to be correlated with activities such as eating, exercise and stress. However, continuous temperature monitoring poses a significant challenge. We present Thermal Earring, a first-of-its-kind smart earring that enables a reliable wearable solution for continuous temperature monitoring. The Thermal Earring takes advantage of the unique position of earrings in proximity to the head, a region with tight coupling to the body, unlike watches and other wearables which are more loosely worn on extremities. We develop a hardware prototype in the form factor of real earrings measuring a maximum width of 11.3 mm and a length of 31 mm, weighing 335 mg, and consuming only 14.4 µW, which enables a battery life of 28 days in real-world tests. We demonstrate this form factor is small and light enough to integrate into real jewelry with fashionable designs. Additionally, we develop a dual-sensor design to differentiate human body temperature change from environmental changes. We explore the use of this novel sensing platform and find its measured earlobe temperatures are stable within ±0.32 °C during periods of rest. Using these promising results, we investigate its capability of detecting fever by gathering data from 5 febrile patients and 20 healthy participants. Further, we perform the first-ever investigation of the relationship between earlobe temperature and a variety of daily activities, demonstrating earlobe temperature changes related to eating and exercise. We also find the surprising result that acute stressors such as public speaking and exams cause measurable changes in earlobe temperature.
Thermal Earring. Qiuyue Shirley Xue, Yujia Liu, Joseph Breda, Mastafa Springston, Vikram Iyer, Shwetak Patel. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–28. https://doi.org/10.1145/3631440. Published January 12, 2024.
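The dual-sensor design separates body-driven temperature change from environmental swings by reading an ambient-facing sensor alongside the earlobe-facing one. A toy correction assuming a fixed linear coupling coefficient (both the coefficient and the subtraction model are illustrative, not the paper's calibration):

```python
def body_temp_change(earlobe, ambient, coupling=0.3):
    """Estimate body-driven temperature change from a dual-sensor earring.
    earlobe/ambient: time-aligned readings in °C; the first sample of each
    is taken as baseline. `coupling` is a hypothetical coefficient for how
    strongly ambient swings leak into the earlobe-facing sensor."""
    e0, a0 = earlobe[0], ambient[0]
    return [(e - e0) - coupling * (a - a0) for e, a in zip(earlobe, ambient)]
```

With this correction, a warm room raising both sensors cancels out, while a fever raises only the corrected earlobe signal.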
Bill Yen, Laura Jaliff, Louis Gutierrez, Philothei Sahinidis, Sadie Bernstein, John Madden, Stephen Taylor, Colleen Josephson, Pat Pannuto, Weitao Shuai, George Wells, Nivedita Arora, Josiah D. Hester
Human-caused climate degradation and the explosion of electronic waste have pushed the computing community to explore fundamental alternatives to the current battery-powered, over-provisioned ubiquitous computing devices that need constant replacement and recharging. Soil Microbial Fuel Cells (SMFCs) offer promise as a renewable energy source that is biocompatible and viable in difficult environments where traditional batteries and solar panels fall short. However, SMFC development is in its infancy, and challenges like robustness to environmental factors and low power output stymie efforts to implement real-world applications in terrestrial environments. This work details a 2-year iterative process that uncovers barriers to practical SMFC design for powering electronics, which we address through a mechanistic understanding of SMFC theory from the literature. We present nine months of deployment data gathered from four SMFC experiments exploring cell geometries, resulting in an improved SMFC that generates power across a wider soil moisture range. From these experiments, we extracted key lessons and a testing framework, assessed SMFC's field performance, contextualized improvements with emerging and existing computing systems, and demonstrated the improved SMFC powering a wireless sensor for soil moisture and touch sensing. We contribute our data, methodology, and designs to establish the foundation for a sustainable, soil-powered future.
Soil-Powered Computing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–40. https://doi.org/10.1145/3631410. Published January 12, 2024.
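Powering electronics from an SMFC's microwatt-scale, moisture-dependent output comes down to energy budgeting: average harvested power minus sleep overhead bounds how often a duty-cycled sensing task can run. A back-of-the-envelope helper (all figures are hypothetical, not measurements from the paper):

```python
def duty_cycle_interval(harvest_uw, task_energy_uj, overhead_uw=2.0):
    """Minimum seconds between sensor tasks sustainable on SMFC harvest.
    harvest_uw: average harvested power (µW); task_energy_uj: energy per
    wake-sense-transmit cycle (µJ); overhead_uw: hypothetical sleep draw.
    Returns None if the cell cannot even cover the sleep overhead."""
    surplus = harvest_uw - overhead_uw
    if surplus <= 0:
        return None
    return task_energy_uj / surplus
```

This is why wider soil-moisture operating range matters: it keeps the harvest above the overhead floor so the interval stays finite.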
Wireless sensing has demonstrated the potential of radio frequency (RF) signals for sensing individuals and objects. Among different wireless signals, the LoRa signal is particularly promising for through-wall sensing owing to its strong penetration capability. However, existing works view walls as a "bad" thing, as they attenuate signal power and decrease sensing coverage. In this paper, we present a counter-intuitive observation: walls can be used to increase sensing coverage if the RF devices are placed properly with respect to them. To fully understand the principle behind this observation, we develop a through-wall sensing model that mathematically quantifies the effect of walls. We further show that, besides increasing sensing coverage, walls can also help mitigate interference, a well-known issue in wireless sensing. We demonstrate the effect of walls through two representative applications: macro-level human walking sensing and micro-level human respiration monitoring. Comprehensive experiments show that by properly deploying the transmitter and receiver with respect to the wall, the coverage of human walking detection can be expanded by more than 160%. By leveraging walls to mitigate interference, we can sense the subtle respiration of a target even in the presence of three interferers walking nearby.
Wall Matters. Binbin Xie, Minhao Cui, Deepak Ganesan, Jie Xiong. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–22. https://doi.org/10.1145/3631417. Published January 12, 2024.
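One way to see how device placement relative to a wall can raise received power is a two-path phasor model: the direct path and a wall-reflected path add constructively or destructively depending on their length difference. A simplified free-space sketch (the 1/d amplitude decay and the reflection coefficient are textbook assumptions, not the paper's through-wall model):

```python
import cmath
import math

def two_path_power(d_direct, d_reflect, wavelength, refl_coeff=0.6):
    """Relative received power when a direct path and a wall-reflected
    path combine as complex phasors. Amplitudes decay as 1/d (free space);
    refl_coeff is a hypothetical wall reflection loss. Distances and
    wavelength in metres."""
    k = 2 * math.pi / wavelength
    direct = cmath.exp(-1j * k * d_direct) / d_direct
    reflected = refl_coeff * cmath.exp(-1j * k * d_reflect) / d_reflect
    return abs(direct + reflected) ** 2
```

A path-length difference of one wavelength puts the two phasors in phase (power above the direct-only baseline); a half-wavelength difference puts them out of phase (power below it), which is why placement with respect to the wall matters.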
The widespread adoption of wearable devices has led to a surge in the development of multi-device wearable human activity recognition (WHAR) systems. Nevertheless, the performance of traditional supervised learning-based methods for WHAR is limited by the challenge of collecting ample annotated wearable data. To overcome this limitation, self-supervised learning (SSL) has emerged as a promising solution: first train a competent feature extractor on a substantial quantity of unlabeled data, then refine a minimal classifier with a small amount of labeled data. Despite the promise of SSL in WHAR, the majority of studies have not considered missing-device scenarios in multi-device WHAR. To bridge this gap, we propose a multi-device SSL WHAR method termed Spatial-Temporal Masked Autoencoder (STMAE). STMAE captures discriminative activity representations by utilizing an asymmetric encoder-decoder structure and a two-stage spatial-temporal masking strategy, which exploits the spatial-temporal correlations in multi-device data to improve the performance of SSL WHAR, especially in missing-device scenarios. Experiments on four real-world datasets demonstrate the efficacy of STMAE in various practical scenarios.
Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition. Shenghuan Miao, Ling Chen, Rong Hu. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–25. https://doi.org/10.1145/3631415. Published January 12, 2024.
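The two-stage spatial-temporal masking can be pictured on a (devices × timesteps × channels) tensor: first drop whole devices (which is what makes the model robust when a wearable is missing at inference time), then mask random timesteps on what remains. A sketch with illustrative ratios (the exact masking schedule is an assumption, not taken from the paper):

```python
import numpy as np

def spatial_temporal_mask(x, spatial_ratio=0.25, temporal_ratio=0.5, seed=0):
    """Two-stage masking over a (devices, timesteps, channels) array.
    Stage 1 (spatial): zero out whole devices, simulating missing wearables.
    Stage 2 (temporal): zero out random timesteps on each device.
    Returns the masked array plus both masks, so a decoder can be trained
    to reconstruct the hidden entries. Ratios are illustrative."""
    rng = np.random.default_rng(seed)
    d, t, _ = x.shape
    masked = x.copy()
    dev_mask = rng.random(d) < spatial_ratio            # stage 1: spatial
    masked[dev_mask] = 0.0
    time_mask = rng.random((d, t)) < temporal_ratio     # stage 2: temporal
    masked[time_mask] = 0.0
    return masked, dev_mask, time_mask
```

The autoencoder's pretraining objective would then be to reconstruct `x` at the masked positions from the surviving device and timestep context.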
Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model while keeping their local training data private, sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, and in particular to membership inference attacks that allow adversaries to determine whether a given data sample belongs to participants' training data, thus raising a significant threat in sensitive ubiquitous computing systems. Membership inference attacks rely on a binary classifier able to differentiate between member data samples used to train a model and non-member data samples not used for training. In this context, several defense mechanisms, including differential privacy, have been proposed to counter such privacy attacks. However, the main drawback of these methods is that they may reduce model accuracy while incurring non-negligible computational costs. In this paper, we address precisely this problem with PASTEL, an FL privacy-preserving mechanism based on a novel multi-objective learning function. On the one hand, PASTEL decreases the generalization gap to reduce the difference between member data and non-member data; on the other hand, PASTEL reduces model loss and leverages adaptive gradient descent optimization for preserving high model accuracy. Our experimental evaluations conducted on eight widely used datasets and five model architectures show that PASTEL significantly reduces membership inference attack success rates by up to 28%, reaching optimal privacy protection in most cases, with low to no perceptible impact on model accuracy.
PASTEL. F. Elhattab, Sara Bouchenak, Cédric Boscher. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–29. https://doi.org/10.1145/3633808. Published January 12, 2024.
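Membership inference exploits the gap between a model's loss on training members and on unseen samples; a defense that shrinks the generalization gap starves the attacker of that signal. A sketch of the classic loss-threshold attack and the gap metric the defense targets (the threshold and loss values are illustrative; this is the generic attack from the literature, not PASTEL's evaluation code):

```python
import math

def nll(prob_true_class):
    """Per-sample negative log-likelihood loss."""
    return -math.log(max(prob_true_class, 1e-12))

def loss_threshold_attack(losses, threshold):
    """Classic loss-threshold membership inference: samples on which the
    model's loss is below the threshold are guessed to be training members,
    since models typically fit their training data more tightly."""
    return [loss < threshold for loss in losses]

def generalization_gap(train_losses, test_losses):
    """Average-loss gap between non-members and members; the quantity a
    gap-minimizing defense drives toward zero."""
    return (sum(test_losses) / len(test_losses)
            - sum(train_losses) / len(train_losses))
```

When the gap is near zero, the attack's accuracy collapses toward random guessing, which is the intuition behind a multi-objective loss that penalizes the gap while preserving accuracy.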
Yangyang Gu, Jing Chen, Cong Wu, Kun He, Ziming Zhao, Ruiying Du
Unlawful wireless cameras are often hidden to secretly monitor private activities. However, existing methods to detect and localize these cameras either require complex user interaction or expensive specialized hardware. In this paper, we present LocCams, an efficient and robust approach for hidden camera detection and localization using only a commodity device (e.g., a smartphone). By analyzing data packets in the wireless local area network, LocCams passively detects hidden cameras based on the packet transmission rate. Camera localization is achieved by identifying whether the physical channel between our detector and the hidden camera is a Line-of-Sight (LOS) propagation path based on the distribution of channel state information subcarriers, and by utilizing a feature extraction approach based on a Convolutional Neural Network (CNN) model for reliable localization. Our extensive experiments, involving various subjects, cameras, distances, user positions, and room configurations, demonstrate LocCams' effectiveness. Additionally, to evaluate real-life performance, we assess the model's transferability using subjects, cameras, and rooms that do not appear in the training set. With an overall accuracy of 95.12% within 30 seconds of detection, LocCams provides robust detection and localization of hidden cameras.
{"title":"LocCams","authors":"Yangyang Gu, Jing Chen, Cong Wu, Kun He, Ziming Zhao, Ruiying Du","doi":"10.1145/3631432","DOIUrl":"https://doi.org/10.1145/3631432","url":null,"abstract":"Unlawful wireless cameras are often hidden to secretly monitor private activities. However, existing methods to detect and localize these cameras are interactively complex or require expensive specialized hardware. In this paper, we present LocCams, an efficient and robust approach for hidden camera detection and localization using only a commodity device (e.g., a smartphone). By analyzing data packets in the wireless local area network, LocCams passively detects hidden cameras based on the packet transmission rate. Camera localization is achieved by identifying whether the physical channel between our detector and the hidden camera is a Line-of-Sight (LOS) propagation path based on the distribution of channel state information subcarriers, and utilizing a feature extraction approach based on a Convolutional Neural Network (CNN) model for reliable localization. Our extensive experiments, involving various subjects, cameras, distances, user positions, and room configurations, demonstrate LocCams' effectiveness. Additionally, to evaluate the performance of the method in real life, we use subjects, cameras, and rooms that do not appear in the training set to evaluate the transferability of the model. 
With an overall accuracy of 95.12% within 30 seconds of detection, LocCams provides robust detection and localization of hidden cameras.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 2","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
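The two-stage LocCams pipeline described above (packet-rate screening over the wireless LAN, then CSI-based LOS classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rate threshold and histogram feature layout are assumptions, and in the real system a CNN consumes the CSI subcarrier distributions rather than a fixed feature vector.

```python
import numpy as np

def is_camera_stream(timestamps, rate_threshold=50.0):
    """Stage 1: flag a Wi-Fi stream as a likely hidden camera when its
    sustained packet rate (pkts/s) exceeds a threshold. The threshold
    value here is an illustrative assumption, not from the paper."""
    duration = timestamps[-1] - timestamps[0]
    rate = (len(timestamps) - 1) / max(duration, 1e-9)
    return rate >= rate_threshold

def csi_features(csi, n_bins=16):
    """Stage 2 input: summarize the distribution of CSI subcarrier
    amplitudes (per-subcarrier mean/std plus a global amplitude
    histogram); a CNN classifier would consume such distributions
    to decide whether the detector-camera path is LOS or NLOS."""
    amp = np.abs(csi)                        # (packets, subcarriers)
    hist, _ = np.histogram(amp, bins=n_bins)
    hist = hist / amp.size                   # normalized histogram
    return np.concatenate([amp.mean(axis=0), amp.std(axis=0), hist])
```

A 100 pkts/s stream (typical of streaming video) trips the rate check, while sparse beacon-like traffic does not; the LOS decision then localizes the flagged device as the user moves the detector.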
Human Activity Recognition (HAR) is a challenging, multi-label classification problem as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogeneous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing and neighborhood-aggregation fashion. Prior work only explored homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by innovatively 1) Constructing three different types of sub-hypergraphs that are each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge-heterogeneity and 2) Adopting a contrastive loss function to ensure node-heterogeneity. In rigorous evaluation on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability. Our code is publicly available1 to encourage further research.
{"title":"Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition","authors":"Wen Ge, Guanyi Mou, Emmanuel O. Agu, Kyumin Lee","doi":"10.1145/3631444","DOIUrl":"https://doi.org/10.1145/3631444","url":null,"abstract":"Human Activity Recognition (HAR) is a challenging, multi-label classification problem as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogeneous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing and neighborhood-aggregation fashion. Prior work only explored homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by innovatively 1) Constructing three different types of sub-hypergraphs that are each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge-heterogeneity and 2) Adopting a contrastive loss function to ensure node-heterogeneity. In rigorous evaluation on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability. 
Our code is publicly available1 to encourage further research.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"12 34","pages":"1 - 23"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
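As a concrete reference point for the two ingredients named above, the generic hypergraph convolution that custom HGC layers typically build on, together with an InfoNCE-style contrastive loss, can be sketched in NumPy. This follows the standard HGNN propagation rule X' = Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Θ; it is not the paper's edge-heterogeneous HGC variant or its exact contrastive objective, both of which are custom designs.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution layer (generic HGNN rule):
    X: (n, d) node features, H: (n, m) incidence matrix,
    Theta: (d, d') learnable projection, edge_w: (m,) edge weights."""
    w = np.ones(H.shape[1]) if edge_w is None else edge_w
    Dv = H @ w                                   # weighted vertex degrees
    De = H.sum(axis=0)                           # edge degrees
    dv = np.where(Dv > 0, 1.0 / np.sqrt(Dv), 0.0)
    de = np.where(De > 0, 1.0 / De, 0.0)
    S = (dv[:, None] * H) * (w * de)             # (n, m) normalized incidence
    out = S @ (H.T * dv) @ X @ Theta             # aggregate via hyperedges
    return np.maximum(out, 0.0)                  # ReLU

def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss pulling each node's embeddings from two views
    together while pushing other nodes apart; a stand-in for the
    paper's node-heterogeneity contrastive objective."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                      # scaled cosine similarity
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))               # matched pairs as positives
```

In a DHC-HGL-like setup, each sub-hypergraph type would get its own incidence matrix H and its own layer weights, with the contrastive term applied across the resulting node embeddings.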