HapticPilot
Youjin Sung, Rachel Kim, Kun Woo Song, Yitian Shao, Sang Ho Yoon
The emergence of vibrotactile feedback in hand wearables enables immersive virtual reality (VR) experiences with whole-hand haptic rendering. However, existing haptic rendering neglects the inconsistent sensations caused by hand postures. In our study, we observed that changing hand postures alters the distribution of vibrotactile signals, which can degrade haptic perception. To address this issue, we present HapticPilot, which enables in-situ haptic experience design for hand wearables in VR. We developed an in-situ authoring system supporting instant haptic design. In the authoring tool, we applied our posture-adaptive haptic rendering algorithm with a novel haptic design abstraction called the phantom grid. The algorithm adapts the phantom grid to the target posture and incorporates 1D and 2D phantom sensation with a unique actuator arrangement to provide a whole-hand experience. With this method, HapticPilot provides a consistent haptic experience across various hand postures. By measuring perceptual haptic performance and collecting qualitative feedback, we validated the usability of the system. Finally, we demonstrated our system with prospective VR scenarios showing how it enables an intuitive, empowering, and responsive haptic authoring framework.
{"title":"HapticPilot","authors":"Youjin Sung, Rachel Kim, Kun Woo Song, Yitian Shao, Sang Ho Yoon","doi":"10.1145/3631453","DOIUrl":"https://doi.org/10.1145/3631453","url":null,"abstract":"The emergence of vibrotactile feedback in hand wearables enables immersive virtual reality (VR) experience with whole-hand haptic rendering. However, existing haptic rendering neglects inconsistent sensations caused by hand postures. In our study, we observed that changing hand postures alters the distribution of vibrotactile signals which might degrade one's haptic perception. To address the issues, we present HapticPilot which allows an in-situ haptic experience design for hand wearables in VR. We developed an in-situ authoring system supporting instant haptic design. In the authoring tool, we applied our posture-adaptive haptic rendering algorithm with a novel haptic design abstraction called phantom grid. The algorithm adapts phantom grid to the target posture and incorporates 1D & 2D phantom sensation with a unique actuator arrangement to provide a whole-hand experience. With this method, HapticPilot provides a consistent haptic experience across various hand postures is available. Through measuring perceptual haptic performance and collecting qualitative feedback, we validated the usability of the system. In the end, we demonstrated our system with prospective VR scenarios showing how it enables an intuitive, empowering, and responsive haptic authoring framework.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TouchEditor
Lishuang Zhan, Tianyang Xiong, Hongwei Zhang, Shihui Guo, Xiaowei Chen, Jiangtao Gong, Juncong Lin, Yipeng Qin
A text editing solution that adapts to speech-unfriendly environments (where speaking is inconvenient or speech recognition is unreliable) is essential for head-mounted displays (HMDs) to work universally. Existing schemes such as touch bars, virtual keyboards, and physical keyboards suffer from insufficient speed, uncomfortable experiences, or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping, and editing commands (e.g., Copy, Paste, Delete). Through a literature review and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements of different directions and granularities. The evaluation results show that TouchEditor i) adapts well to various contents and scenes with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; and iii) performs comparably to a mobile phone in text selection tasks. Comparison results with the speech-dependent EYEditor and a built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.
{"title":"TouchEditor","authors":"Lishuang Zhan, Tianyang Xiong, Hongwei Zhang, Shihui Guo, Xiaowei Chen, Jiangtao Gong, Juncong Lin, Yipeng Qin","doi":"10.1145/3631454","DOIUrl":"https://doi.org/10.1145/3631454","url":null,"abstract":"A text editing solution that adapts to speech-unfriendly (inconvenient to speak or difficult to recognize speech) environments is essential for head-mounted displays (HMDs) to work universally. For existing schemes, e.g., touch bar, virtual keyboard and physical keyboard, there are shortcomings such as insufficient speed, uncomfortable experience or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping and editing commands (i.e., Copy, Paste, Delete, etc.). Through literature overview and heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that skillfully maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor i) adapts to various contents and scenes well with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; iii) reaches a considerable level with a mobile phone in text selection tasks. The comparison results with the speech-dependent EYEditor and the built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thermal Earring
Qiuyue Shirley Xue, Yujia Liu, Joseph Breda, Mastafa Springston, Vikram Iyer, Shwetak Patel
Body temperature is an important vital sign that can indicate fever and is known to correlate with activities such as eating, exercise, and stress. However, continuous temperature monitoring poses a significant challenge. We present Thermal Earring, a first-of-its-kind smart earring that enables a reliable wearable solution for continuous temperature monitoring. The Thermal Earring takes advantage of the unique position of earrings close to the head, a region tightly coupled to the body, unlike watches and other wearables worn more loosely on the extremities. We develop a hardware prototype in the form factor of real earrings, measuring a maximum width of 11.3 mm and a length of 31 mm, weighing 335 mg, and consuming only 14.4 μW, which enables a battery life of 28 days in real-world tests. We demonstrate that this form factor is small and light enough to integrate into real jewelry with fashionable designs. Additionally, we develop a dual-sensor design to differentiate human body temperature changes from environmental changes. We explore the use of this novel sensing platform and find its measured earlobe temperatures are stable within ±0.32 °C during periods of rest. Building on these promising results, we investigate its capability to detect fever by gathering data from 5 febrile patients and 20 healthy participants. Further, we perform the first-ever investigation of the relationship between earlobe temperature and a variety of daily activities, demonstrating earlobe temperature changes related to eating and exercise. We also find the surprising result that acute stressors such as public speaking and exams cause measurable changes in earlobe temperature. We perform multi-day in-the-wild experiments and confirm the temperature changes caused by these daily activities in natural daily scenarios. This initial exploration seeks to provide a foundation for future automatic activity detection and earring-based wearables.
{"title":"Thermal Earring","authors":"Qiuyue Shirley Xue, Yujia Liu, Joseph Breda, Mastafa Springston, Vikram Iyer, Shwetak Patel","doi":"10.1145/3631440","DOIUrl":"https://doi.org/10.1145/3631440","url":null,"abstract":"Body temperature is an important vital sign which can indicate fever and is known to be correlated with activities such as eating, exercise and stress. However, continuous temperature monitoring poses a significant challenge. We present Thermal Earring, a first-of-its-kind smart earring that enables a reliable wearable solution for continuous temperature monitoring. The Thermal Earring takes advantage of the unique position of earrings in proximity to the head, a region with tight coupling to the body unlike watches and other wearables which are more loosely worn on extremities. We develop a hardware prototype in the form factor of real earrings measuring a maximum width of 11.3 mm and a length of 31 mm, weighing 335 mg, and consuming only 14.4 uW which enables a battery life of 28 days in real-world tests. We demonstrate this form factor is small and light enough to integrate into real jewelry with fashionable designs. Additionally, we develop a dual sensor design to differentiate human body temperature change from environmental changes. We explore the use of this novel sensing platform and find its measured earlobe temperatures are stable within ±0.32 °C during periods of rest. Using these promising results, we investigate its capability of detecting fever by gathering data from 5 febrile patients and 20 healthy participants. Further, we perform the first-ever investigation of the relationship between earlobe temperature and a variety of daily activities, demonstrating earlobe temperature changes related to eating and exercise. We also find the surprising result that acute stressors such as public speaking and exams cause measurable changes in earlobe temperature. We perform multi-day in-the-wild experiments and confirm the temperature changes caused by these daily activities in natural daily scenarios. This initial exploration seeks to provide a foundation for future automatic activity detection and earring-based wearables.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soil-Powered Computing
Bill Yen, Laura Jaliff, Louis Gutierrez, Philothei Sahinidis, Sadie Bernstein, John Madden, Stephen Taylor, Colleen Josephson, Pat Pannuto, Weitao Shuai, George Wells, Nivedita Arora, Josiah D. Hester
Human-caused climate degradation and the explosion of electronic waste have pushed the computing community to explore fundamental alternatives to the current battery-powered, over-provisioned ubiquitous computing devices that need constant replacement and recharging. Soil Microbial Fuel Cells (SMFCs) offer promise as a renewable energy source that is biocompatible and viable in difficult environments where traditional batteries and solar panels fall short. However, SMFC development is in its infancy, and challenges like robustness to environmental factors and low power output stymie efforts to implement real-world applications in terrestrial environments. This work details a 2-year iterative process that uncovers barriers to practical SMFC design for powering electronics, which we address through a mechanistic understanding of SMFC theory from the literature. We present nine months of deployment data gathered from four SMFC experiments exploring cell geometries, resulting in an improved SMFC that generates power across a wider soil moisture range. From these experiments, we extracted key lessons and a testing framework, assessed SMFC's field performance, contextualized improvements with emerging and existing computing systems, and demonstrated the improved SMFC powering a wireless sensor for soil moisture and touch sensing. We contribute our data, methodology, and designs to establish the foundation for a sustainable, soil-powered future.
{"title":"Soil-Powered Computing","authors":"Bill Yen, Laura Jaliff, Louis Gutierrez, Philothei Sahinidis, Sadie Bernstein, John Madden, Stephen Taylor, Colleen Josephson, Pat Pannuto, Weitao Shuai, George Wells, Nivedita Arora, Josiah D. Hester","doi":"10.1145/3631410","DOIUrl":"https://doi.org/10.1145/3631410","url":null,"abstract":"Human-caused climate degradation and the explosion of electronic waste have pushed the computing community to explore fundamental alternatives to the current battery-powered, over-provisioned ubiquitous computing devices that need constant replacement and recharging. Soil Microbial Fuel Cells (SMFCs) offer promise as a renewable energy source that is biocompatible and viable in difficult environments where traditional batteries and solar panels fall short. However, SMFC development is in its infancy, and challenges like robustness to environmental factors and low power output stymie efforts to implement real-world applications in terrestrial environments. This work details a 2-year iterative process that uncovers barriers to practical SMFC design for powering electronics, which we address through a mechanistic understanding of SMFC theory from the literature. We present nine months of deployment data gathered from four SMFC experiments exploring cell geometries, resulting in an improved SMFC that generates power across a wider soil moisture range. From these experiments, we extracted key lessons and a testing framework, assessed SMFC's field performance, contextualized improvements with emerging and existing computing systems, and demonstrated the improved SMFC powering a wireless sensor for soil moisture and touch sensing. We contribute our data, methodology, and designs to establish the foundation for a sustainable, soil-powered future.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wall Matters
Binbin Xie, Minhao Cui, Deepak Ganesan, Jie Xiong
Wireless sensing has demonstrated the potential of radio frequency (RF) signals to sense individuals and objects. Among wireless signals, LoRa is particularly promising for through-wall sensing owing to its strong penetration capability. However, existing works view walls as a "bad" thing, since they attenuate signal power and shrink the sensing coverage. In this paper, we report a counter-intuitive observation: walls can be used to increase the sensing coverage if the RF devices are placed properly with respect to them. To understand the underlying principle behind this observation, we develop a through-wall sensing model that mathematically quantifies the effect of walls. We further show that, besides increasing the sensing coverage, the wall can also help mitigate interference, a well-known issue in wireless sensing. We demonstrate the effect of walls through two representative applications: macro-level human walking sensing and micro-level human respiration monitoring. Comprehensive experiments show that by properly deploying the transmitter and receiver with respect to the wall, the coverage of human walking detection can be expanded by more than 160%. By leveraging the wall to mitigate interference, we can sense the tiny respiration of a target even in the presence of three interferers walking nearby.
{"title":"Wall Matters","authors":"Binbin Xie, Minhao Cui, Deepak Ganesan, Jie Xiong","doi":"10.1145/3631417","DOIUrl":"https://doi.org/10.1145/3631417","url":null,"abstract":"Wireless sensing has demonstrated its potential of utilizing radio frequency (RF) signals to sense individuals and objects. Among different wireless signals, LoRa signal is particularly promising for through-wall sensing owing to its strong penetration capability. However, existing works view walls as a \"bad\" thing as they attenuate signal power and decrease the sensing coverage. In this paper, we show a counter-intuitive observation, i.e., walls can be used to increase the sensing coverage if the RF devices are placed properly with respect to walls. To fully understand the underlying principle behind this observation, we develop a through-wall sensing model to mathematically quantify the effect of walls. We further show that besides increasing the sensing coverage, we can also use the wall to help mitigate interference, which is one well-known issue in wireless sensing. We demonstrate the effect of wall through two representative applications, i.e., macro-level human walking sensing and micro-level human respiration monitoring. Comprehensive experiments show that by properly deploying the transmitter and receiver with respect to the wall, the coverage of human walking detection can be expanded by more than 160%. By leveraging the effect of wall to mitigate interference, we can sense the tiny respiration of target even in the presence of three interferers walking nearby.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition
Shenghuan Miao, Ling Chen, Rong Hu
The widespread adoption of wearable devices has led to a surge in the development of multi-device wearable human activity recognition (WHAR) systems. Nevertheless, the performance of traditional supervised learning methods for WHAR is limited by the challenge of collecting ample annotated wearable data. To overcome this limitation, self-supervised learning (SSL) has emerged as a promising solution: first train a competent feature extractor on a substantial quantity of unlabeled data, then refine a minimal classifier with a small amount of labeled data. Despite the promise of SSL in WHAR, the majority of studies have not considered missing-device scenarios in multi-device WHAR. To bridge this gap, we propose a multi-device SSL WHAR method termed Spatial-Temporal Masked Autoencoder (STMAE). STMAE captures discriminative activity representations with an asymmetrical encoder-decoder structure and a two-stage spatial-temporal masking strategy, which exploits the spatial-temporal correlations in multi-device data to improve the performance of SSL WHAR, especially in missing-device scenarios. Experiments on four real-world datasets demonstrate the efficacy of STMAE in various practical scenarios.
{"title":"Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition","authors":"Shenghuan Miao, Ling Chen, Rong Hu","doi":"10.1145/3631415","DOIUrl":"https://doi.org/10.1145/3631415","url":null,"abstract":"The widespread adoption of wearable devices has led to a surge in the development of multi-device wearable human activity recognition (WHAR) systems. Nevertheless, the performance of traditional supervised learning-based methods to WHAR is limited by the challenge of collecting ample annotated wearable data. To overcome this limitation, self-supervised learning (SSL) has emerged as a promising solution by first training a competent feature extractor on a substantial quantity of unlabeled data, followed by refining a minimal classifier with a small amount of labeled data. Despite the promise of SSL in WHAR, the majority of studies have not considered missing device scenarios in multi-device WHAR. To bridge this gap, we propose a multi-device SSL WHAR method termed Spatial-Temporal Masked Autoencoder (STMAE). STMAE captures discriminative activity representations by utilizing the asymmetrical encoder-decoder structure and two-stage spatial-temporal masking strategy, which can exploit the spatial-temporal correlations in multi-device data to improve the performance of SSL WHAR, especially on missing device scenarios. Experiments on four real-world datasets demonstrate the efficacy of STMAE in various practical scenarios.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PASTEL
F. Elhattab, Sara Bouchenak, Cédric Boscher
Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model while keeping their local training data private, sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, and in particular to membership inference attacks, which allow adversaries to determine whether a given data sample belongs to participants' training data, raising a significant threat in sensitive ubiquitous computing systems. Membership inference attacks rely on a binary classifier that differentiates between member data samples used to train a model and non-member data samples not used for training. Several defense mechanisms, including differential privacy, have been proposed to counter such privacy attacks, but their main drawback is that they may reduce model accuracy while incurring non-negligible computational costs. In this paper, we address precisely this problem with PASTEL, an FL privacy-preserving mechanism based on a novel multi-objective learning function. On the one hand, PASTEL decreases the generalization gap to reduce the difference between member and non-member data; on the other hand, it reduces model loss and leverages adaptive gradient descent optimization to preserve high model accuracy. Our experimental evaluations on eight widely used datasets and five model architectures show that PASTEL reduces membership inference attack success rates by up to 28%, reaching optimal privacy protection in most cases, with low to no perceptible impact on model accuracy.
{"title":"PASTEL","authors":"F. Elhattab, Sara Bouchenak, Cédric Boscher","doi":"10.1145/3633808","DOIUrl":"https://doi.org/10.1145/3633808","url":null,"abstract":"Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model, while preserving their local training data private, and sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, and in particular, to membership inference attacks that allow adversaries to determine whether a given data sample belongs to participants' training data, thus, raising a significant threat in sensitive ubiquitous computing systems. Indeed, membership inference attacks are based on a binary classifier that is able to differentiate between member data samples used to train a model and non-member data samples not used for training. In this context, several defense mechanisms, including differential privacy, have been proposed to counter such privacy attacks. However, the main drawback of these methods is that they may reduce model accuracy while incurring non-negligible computational costs. In this paper, we precisely address this problem with PASTEL, a FL privacy-preserving mechanism that is based on a novel multi-objective learning function. On the one hand, PASTEL decreases the generalization gap to reduce the difference between member data and non-member data, and on the other hand, PASTEL reduces model loss and leverages adaptive gradient descent optimization for preserving high model accuracy. Our experimental evaluations conducted on eight widely used datasets and five model architectures show that PASTEL significantly reduces membership inference attack success rates by up to -28%, reaching optimal privacy protection in most cases, with low to no perceptible impact on model accuracy.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LocCams
Yangyang Gu, Jing Chen, Cong Wu, Kun He, Ziming Zhao, Ruiying Du
Unlawful wireless cameras are often hidden to secretly monitor private activities. However, existing methods to detect and localize these cameras require complex user interaction or expensive specialized hardware. In this paper, we present LocCams, an efficient and robust approach for hidden camera detection and localization using only a commodity device (e.g., a smartphone). By analyzing data packets in the wireless local area network, LocCams passively detects hidden cameras based on the packet transmission rate. Camera localization is achieved by identifying whether the physical channel between our detector and the hidden camera is a line-of-sight (LOS) propagation path, based on the distribution of channel state information subcarriers, and by using a feature extraction approach based on a Convolutional Neural Network (CNN) model for reliable localization. Our extensive experiments, involving various subjects, cameras, distances, user positions, and room configurations, demonstrate LocCams' effectiveness. Additionally, to evaluate real-life performance, we test the model's transferability on subjects, cameras, and rooms that do not appear in the training set. With an overall accuracy of 95.12% within 30 seconds of detection, LocCams provides robust detection and localization of hidden cameras.
{"title":"LocCams","authors":"Yangyang Gu, Jing Chen, Cong Wu, Kun He, Ziming Zhao, Ruiying Du","doi":"10.1145/3631432","DOIUrl":"https://doi.org/10.1145/3631432","url":null,"abstract":"Unlawful wireless cameras are often hidden to secretly monitor private activities. However, existing methods to detect and localize these cameras are interactively complex or require expensive specialized hardware. In this paper, we present LocCams, an efficient and robust approach for hidden camera detection and localization using only a commodity device (e.g., a smartphone). By analyzing data packets in the wireless local area network, LocCams passively detects hidden cameras based on the packet transmission rate. Camera localization is achieved by identifying whether the physical channel between our detector and the hidden camera is a Line-of-Sight (LOS) propagation path based on the distribution of channel state information subcarriers, and utilizing a feature extraction approach based on a Convolutional Neural Network (CNN) model for reliable localization. Our extensive experiments, involving various subjects, cameras, distances, user positions, and room configurations, demonstrate LocCams' effectiveness. Additionally, to evaluate the performance of the method in real life, we use subjects, cameras, and rooms that do not appear in the training set to evaluate the transferability of the model. With an overall accuracy of 95.12% within 30 seconds of detection, LocCams provides robust detection and localization of hidden cameras.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition
Wen Ge, Guanyi Mou, Emmanuel O. Agu, Kyumin Lee
Human Activity Recognition (HAR) is a challenging multi-label classification problem, as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogeneous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing, neighborhood-aggregation fashion. Prior work explored only homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by 1) constructing three different types of sub-hypergraphs, each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge heterogeneity, and 2) adopting a contrastive loss function to ensure node heterogeneity. In rigorous evaluations on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability. Our code is publicly available to encourage further research.
{"title":"Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition","authors":"Wen Ge, Guanyi Mou, Emmanuel O. Agu, Kyumin Lee","doi":"10.1145/3631444","DOIUrl":"https://doi.org/10.1145/3631444","url":null,"abstract":"Human Activity Recognition (HAR) is a challenging, multi-label classification problem as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogenous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing and neighborhood-aggregation fashion. Prior work only explored homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by innovatively 1) Constructing three different types of sub-hypergraphs that are each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge-heterogeneity and 2) Adopting a contrastive loss function to ensure node-heterogeneity. In rigorous evaluation on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability. Our code is publicly available1 to encourage further research.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reenvisioning Patient Education with Smart Hospital Patient Rooms
Joshua Dawson, K. J. Phanich, Jason Wiese
Smart hospital patient rooms incorporate various smart devices to allow digital control of the entertainment (such as the TV and soundbar) and the environment (including lights, blinds, and thermostat). This technology can benefit patients by providing a more accessible, engaging, and personalized approach to their care. Many patients arrive at a rehabilitation hospital because they suffered a life-changing event such as a spinal cord injury or stroke, and it can be challenging for them to learn to cope with the changed abilities that are the new norm in their lives. This study explores ways smart patient rooms can support rehabilitation education to prepare patients for life outside the hospital's care. We conducted 20 contextual inquiries and four interviews with rehabilitation educators as they performed education sessions with patients and informal caregivers. Using thematic analysis, our findings offer insights into how smart patient rooms could revolutionize patient education by fostering better engagement with educational content, reducing interruptions during sessions, providing more agile educational content management, and customizing therapy elements for each patient's unique needs. Lastly, we discuss design opportunities for future smart patient room implementations for a better educational experience in any healthcare context.
{"title":"Reenvisioning Patient Education with Smart Hospital Patient Rooms","authors":"Joshua Dawson, K. J. Phanich, Jason Wiese","doi":"10.1145/3631419","DOIUrl":"https://doi.org/10.1145/3631419","url":null,"abstract":"Smart hospital patient rooms incorporate various smart devices to allow digital control of the entertainment --- such as TV and soundbar --- and the environment --- including lights, blinds, and thermostat. This technology can benefit patients by providing a more accessible, engaging, and personalized approach to their care. Many patients arrive at a rehabilitation hospital because they suffered a life-changing event such as a spinal cord injury or stroke. It can be challenging for patients to learn to cope with the changed abilities that are the new norm in their lives. This study explores ways smart patient rooms can support rehabilitation education to prepare patients for life outside the hospital's care. We conducted 20 contextual inquiries and four interviews with rehabilitation educators as they performed education sessions with patients and informal caregivers. Using thematic analysis, our findings offer insights into how smart patient rooms could revolutionize patient education by fostering better engagement with educational content, reducing interruptions during sessions, providing more agile education content management, and customizing therapy elements for each patient's unique needs. Lastly, we discuss design opportunities for future smart patient room implementations for a better educational experience in any healthcare context.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}