Pub Date: 2024-09-01. Epub Date: 2024-09-09. DOI: 10.1145/3678584
H A LE, Rithika Lakshminarayanan, Jixin Li, Varun Mishra, Stephen Intille
μEMA is a data collection method that prompts research participants with quick, answer-at-a-glance, single multiple-choice self-report questions about their behavior, enabling high-temporal-density self-report of up to four times per hour when implemented on a smartwatch. However, due to the small watch screen, μEMA is better suited to selecting among 2 to 5 multiple-choice answers than to collecting open-ended responses. We introduce an alternative, novel form of micro-interaction self-report using speech input, audio-μEMA, in which a short beep or vibration cues participants to verbally report their behavioral states, allowing for open-ended, temporally dense self-reports. We conducted a one-hour usability study followed by a within-subject, 6-day to 21-day free-living feasibility study in which participants self-reported their physical activities and postures once every 2 to 5 minutes. We qualitatively explored the usability of the system and identified factors impacting the response rates of this data collection method. Despite being interrupted 12 to 20 times per hour, participants in the free-living study were highly engaged with the system, with an average response rate of 67.7% for audio-μEMA for up to 14 days. We discuss the factors that impacted feasibility; some implementation, methodological, and participant challenges we observed; and important considerations relevant to deploying audio-μEMA in real-time activity recognition systems.
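As a rough sketch of the sampling arithmetic behind this protocol (not code from the paper; the uniform 2-5 minute gap model and the helper names are assumptions), a prompt schedule and response-rate calculation might look like:

```python
import random

def prompt_schedule(duration_min, min_gap=2.0, max_gap=5.0, seed=0):
    """Draw prompt times (in minutes) with uniform random 2-5 min gaps,
    which bounds the count near the protocol's 12-20 interruptions/hour."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= duration_min:
            break
        times.append(round(t, 2))
    return times

def response_rate(answered, prompted):
    """Fraction of prompts that received a verbal response."""
    return answered / prompted if prompted else 0.0

hour = prompt_schedule(60)
assert 10 <= len(hour) <= 30  # 2-5 min gaps bound the hourly prompt count
```

At this rate, the reported 67.7% average response rate corresponds to answering roughly two out of every three of the 12-20 hourly prompts.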
"Collecting Self-reported Physical Activity and Posture Data Using Audio-based Ecological Momentary Assessment." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3). DOI: 10.1145/3678584. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12573594/pdf/
Pub Date: 2024-03-01. Epub Date: 2024-03-06. DOI: 10.1145/3643540
Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K Dey, Dakuo Wang
Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present a comprehensive evaluation of multiple LLMs on various mental health prediction tasks using online text data, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of experiments covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction fine-tuning can significantly boost the performance of LLMs on all tasks simultaneously. Our best fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger, respectively) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize important limitations that must be addressed before deployment in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.
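The balanced-accuracy metric used in these comparisons is the mean of per-class recall, which prevents a majority class from dominating the score on imbalanced mental-health labels. A minimal sketch (function name and toy labels are illustrative, not from the paper):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally
    regardless of how many examples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
# per-class recall: class 1 -> 3/4, class 0 -> 1/2; mean = 0.625
assert abs(balanced_accuracy(y_true, y_pred) - 0.625) < 1e-9
```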
"Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(1). DOI: 10.1145/3643540. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11806945/pdf/
Rongrong Wang, Rui Tan, Zhenyu Yan, Chris Xiaoxuan Lu
Identifying new sensing modalities for indoor localization is an ongoing research interest. This paper studies the powerline-induced alternating magnetic field (AMF) that fills indoor space for orientation-aware three-dimensional (3D) simultaneous localization and mapping (SLAM). While an existing study has adopted a uniaxial AMF sensor for SLAM on a planar surface, that design falls short of addressing the vector-field nature of AMF and is therefore susceptible to sensor orientation variations. Moreover, although the higher spatial variability of AMF compared with indoor geomagnetism improves location-sensing resolution, extra SLAM algorithm designs are needed to achieve robustness to trajectory deviations from the constructed map. To address these issues, we design a new triaxial AMF sensor and a new SLAM algorithm that constructs a 3D AMF intensity map regularized and augmented by a Gaussian process. The triaxial sensor's orientation estimation is free of the error accumulation problem faced by inertial sensing. In extensive evaluations in eight indoor environments, our AMF-based 3D SLAM achieves sub-1 m to 3 m median localization errors in spaces of up to 500 m², sub-2° mean error in orientation sensing, and outperforms SLAM systems based on Wi-Fi, geomagnetism, and uniaxial AMF by more than 30%.
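One property that motivates a triaxial (rather than uniaxial) sensor is that the magnitude of a vector-field reading is invariant under sensor rotation, so an intensity map built from magnitudes is robust to orientation variations. A small numpy sketch of this property (the reading and rotation values are hypothetical, not from the paper):

```python
import numpy as np

def rotation_matrix_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

b = np.array([12.0, -3.5, 7.2])        # hypothetical triaxial AMF reading
R = rotation_matrix_z(np.deg2rad(40))  # sensor rotated 40 degrees
b_rotated = R @ b

# The per-axis readings change with orientation (the uniaxial failure mode)...
assert not np.allclose(b, b_rotated)
# ...but the field magnitude used for an intensity map does not.
assert np.isclose(np.linalg.norm(b), np.linalg.norm(b_rotated))
```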
"Orientation-Aware 3D SLAM in Alternating Magnetic Field from Powerlines." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-25. Published 2024-01-12. DOI: 10.1145/3631446.
Youjin Sung, Rachel Kim, Kun Woo Song, Yitian Shao, Sang Ho Yoon
The emergence of vibrotactile feedback in hand wearables enables immersive virtual reality (VR) experiences with whole-hand haptic rendering. However, existing haptic rendering neglects the inconsistent sensations caused by hand postures. In our study, we observed that changing hand postures alters the distribution of vibrotactile signals, which can degrade haptic perception. To address this, we present HapticPilot, which enables in-situ haptic experience design for hand wearables in VR. We developed an in-situ authoring system supporting instant haptic design. In the authoring tool, we applied our posture-adaptive haptic rendering algorithm with a novel haptic design abstraction called the phantom grid. The algorithm adapts the phantom grid to the target posture and incorporates 1D & 2D phantom sensation with a unique actuator arrangement to provide a whole-hand experience. With this method, HapticPilot provides a consistent haptic experience across various hand postures. Through measuring perceptual haptic performance and collecting qualitative feedback, we validated the usability of the system. Finally, we demonstrated our system with prospective VR scenarios showing how it enables an intuitive, empowering, and responsive haptic authoring framework.
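A common way to render a 1D phantom sensation between two physical actuators is an energy-summation model, in which the two drive amplitudes split the virtual stimulus by position. The sketch below illustrates that general idea only; the paper's exact formulation and parameters may differ:

```python
import math

def phantom_amplitudes(beta, a_v=1.0):
    """Energy-model interpolation for a phantom point between two
    actuators: beta in [0, 1] is the normalized position of the virtual
    actuation point, a_v the virtual amplitude."""
    a1 = math.sqrt(1.0 - beta) * a_v
    a2 = math.sqrt(beta) * a_v
    return a1, a2

a1, a2 = phantom_amplitudes(0.5)
# Total vibration energy is preserved wherever the phantom point sits.
assert abs(a1**2 + a2**2 - 1.0) < 1e-9
```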
"HapticPilot." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-28. Published 2024-01-12. DOI: 10.1145/3631453.
A text editing solution that adapts to speech-unfriendly environments (where speaking is inconvenient or speech is difficult to recognize) is essential for head-mounted displays (HMDs) to work universally. Existing schemes, e.g., the touch bar, virtual keyboard, and physical keyboard, have shortcomings such as insufficient speed, an uncomfortable experience, or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping, and editing commands (i.e., Copy, Paste, Delete, etc.). Through a literature overview and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor i) adapts well to various contents and scenes, with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; and iii) performs comparably to a mobile phone in text selection tasks. The comparison results with the speech-dependent EYEditor and the built-in touch bar further prove the flexibility and robustness of TouchEditor in speech-unfriendly environments.
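The area-and-pressure idea can be pictured as a small dispatch: the touched area selects the cursor direction and the press strength selects the granularity. A toy sketch (the threshold, area labels, and function name are hypothetical, not the paper's design):

```python
def cursor_step(area, pressure, light_threshold=0.3):
    """Map a touch gesture to a cursor movement: the touched area picks
    the direction and the normalized press strength picks how far the
    cursor jumps. Thresholds and labels are illustrative only."""
    direction = {"left": -1, "right": +1}[area]
    granularity = "char" if pressure < light_threshold else "word"
    return direction, granularity

assert cursor_step("right", 0.1) == (1, "char")   # light press: fine-grained
assert cursor_step("left", 0.8) == (-1, "word")   # firm press: coarse-grained
```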
"TouchEditor," by Lishuang Zhan, Tianyang Xiong, Hongwei Zhang, Shihui Guo, Xiaowei Chen, Jiangtao Gong, Juncong Lin, Yipeng Qin. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-29. Published 2024-01-12. DOI: 10.1145/3631454.
Body temperature is an important vital sign that can indicate fever and is known to be correlated with activities such as eating, exercise, and stress. However, continuous temperature monitoring poses a significant challenge. We present Thermal Earring, a first-of-its-kind smart earring that enables a reliable wearable solution for continuous temperature monitoring. The Thermal Earring takes advantage of the unique position of earrings in proximity to the head, a region tightly coupled to the body, unlike watches and other wearables worn more loosely on the extremities. We develop a hardware prototype in the form factor of real earrings, measuring a maximum width of 11.3 mm and a length of 31 mm, weighing 335 mg, and consuming only 14.4 μW, which enables a battery life of 28 days in real-world tests. We demonstrate this form factor is small and light enough to integrate into real jewelry with fashionable designs. Additionally, we develop a dual-sensor design to differentiate human body temperature change from environmental changes. We explore the use of this novel sensing platform and find its measured earlobe temperatures are stable within ±0.32 °C during periods of rest. Using these promising results, we investigate its capability of detecting fever by gathering data from 5 febrile patients and 20 healthy participants. Further, we perform the first-ever investigation of the relationship between earlobe temperature and a variety of daily activities, demonstrating earlobe temperature changes related to eating and exercise. We also find the surprising result that acute stressors such as public speaking and exams cause measurable changes in earlobe temperature. We perform multi-day in-the-wild experiments and confirm the temperature changes caused by these daily activities in natural daily scenarios. This initial exploration seeks to provide a foundation for future automatic activity detection and earring-based wearables.
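A dual-sensor design can, in principle, correct the skin-side reading for ambient pull with a simple linear compensation against the second, environment-facing sensor. The sketch below only illustrates that idea; the coefficient and the linear model are assumptions, not the paper's calibration:

```python
def body_temp_estimate(skin_temp_c, ambient_temp_c, k=0.12):
    """Compensate an earlobe (skin-side) reading for ambient pull with a
    linear heat-transfer coefficient k. The value of k is hypothetical."""
    return skin_temp_c + k * (skin_temp_c - ambient_temp_c)

# A cold room pulls the earlobe reading down; the correction pushes it back up.
assert body_temp_estimate(34.0, 20.0) > 34.0
# With no skin-ambient gradient, the reading is left unchanged.
assert body_temp_estimate(34.0, 34.0) == 34.0
```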
"Thermal Earring," by Qiuyue Shirley Xue, Yujia Liu, Joseph Breda, Mastafa Springston, Vikram Iyer, Shwetak Patel. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-28. Published 2024-01-12. DOI: 10.1145/3631440.
Bill Yen, Laura Jaliff, Louis Gutierrez, Philothei Sahinidis, Sadie Bernstein, John Madden, Stephen Taylor, Colleen Josephson, Pat Pannuto, Weitao Shuai, George Wells, Nivedita Arora, Josiah D. Hester
Human-caused climate degradation and the explosion of electronic waste have pushed the computing community to explore fundamental alternatives to the current battery-powered, over-provisioned ubiquitous computing devices that need constant replacement and recharging. Soil Microbial Fuel Cells (SMFCs) offer promise as a renewable energy source that is biocompatible and viable in difficult environments where traditional batteries and solar panels fall short. However, SMFC development is in its infancy, and challenges like robustness to environmental factors and low power output stymie efforts to implement real-world applications in terrestrial environments. This work details a 2-year iterative process that uncovers barriers to practical SMFC design for powering electronics, which we address through a mechanistic understanding of SMFC theory from the literature. We present nine months of deployment data gathered from four SMFC experiments exploring cell geometries, resulting in an improved SMFC that generates power across a wider soil moisture range. From these experiments, we extracted key lessons and a testing framework, assessed SMFC's field performance, contextualized improvements with emerging and existing computing systems, and demonstrated the improved SMFC powering a wireless sensor for soil moisture and touch sensing. We contribute our data, methodology, and designs to establish the foundation for a sustainable, soil-powered future.
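A back-of-the-envelope way to size applications for a low-power harvester such as an SMFC is a duty-cycle budget: the node can be active only for the fraction of time at which its average draw equals the harvested power. The formula and the example numbers below are illustrative sizing math, not measurements from the paper:

```python
def max_duty_cycle(p_harvest_w, p_active_w, p_sleep_w):
    """Solve d * p_active + (1 - d) * p_sleep = p_harvest for the duty
    cycle d, clamped to [0, 1]. All inputs in watts."""
    if p_harvest_w <= p_sleep_w:
        return 0.0  # harvest cannot even cover sleep current
    return min(1.0, (p_harvest_w - p_sleep_w) / (p_active_w - p_sleep_w))

# Hypothetical numbers: 100 uW harvested, 5 mW active radio, 10 uW sleep.
d = max_duty_cycle(100e-6, 5e-3, 10e-6)
assert 0.0 < d < 0.05  # the node must sleep the vast majority of the time
```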
"Soil-Powered Computing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-40. Published 2024-01-12. DOI: 10.1145/3631410.
Wireless sensing has demonstrated its potential to use radio frequency (RF) signals to sense individuals and objects. Among wireless signals, the LoRa signal is particularly promising for through-wall sensing owing to its strong penetration capability. However, existing works view walls as a "bad" thing, since they attenuate signal power and decrease sensing coverage. In this paper, we present a counter-intuitive observation: walls can be used to increase sensing coverage if the RF devices are placed properly with respect to them. To fully understand the principle behind this observation, we develop a through-wall sensing model that mathematically quantifies the effect of walls. We further show that, besides increasing sensing coverage, the wall can also help mitigate interference, a well-known issue in wireless sensing. We demonstrate the effect of walls through two representative applications: macro-level human walking sensing and micro-level human respiration monitoring. Comprehensive experiments show that by properly deploying the transmitter and receiver with respect to the wall, the coverage of human walking detection can be expanded by more than 160%. By leveraging the effect of the wall to mitigate interference, we can sense the tiny respiration of a target even in the presence of three interferers walking nearby.
"Wall Matters" — Binbin Xie, Minhao Cui, Deepak Ganesan, Jie Xiong. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-22, published 2024-01-12. DOI: 10.1145/3631417.
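The paper's core claim is that a wall-reflected propagation path can add sensing power rather than only subtracting it. A minimal sketch of that intuition, using a standard free-space path-loss formula plus a hypothetical wall-reflection loss (the function names, the 915 MHz default, and the 6 dB reflection loss are illustrative assumptions, not the authors' model):

```python
import math

def path_loss_db(d_m, f_hz=915e6):
    """Free-space path loss in dB at distance d_m (meters) and frequency f_hz."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

def sensing_power_dbm(tx_dbm, d_direct, d_reflect=None, wall_refl_loss_db=6.0):
    """Received power (dBm) from a direct path plus an optional wall-reflected path.

    When both devices sit on the same side of a wall, the wall contributes an
    extra reflected path; powers are summed in the linear (mW) domain.
    """
    p_direct = tx_dbm - path_loss_db(d_direct)
    if d_reflect is None:
        return p_direct
    p_reflect = tx_dbm - path_loss_db(d_reflect) - wall_refl_loss_db
    total_mw = 10 ** (p_direct / 10) + 10 ** (p_reflect / 10)
    return 10 * math.log10(total_mw)
```

Even a lossy reflected path raises total received power over the direct path alone, which is the mechanism by which a properly placed wall can extend coverage.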
The widespread adoption of wearable devices has led to a surge in the development of multi-device wearable human activity recognition (WHAR) systems. Nevertheless, the performance of traditional supervised learning methods for WHAR is limited by the challenge of collecting ample annotated wearable data. To overcome this limitation, self-supervised learning (SSL) has emerged as a promising solution: a competent feature extractor is first trained on a substantial quantity of unlabeled data, and a minimal classifier is then refined with a small amount of labeled data. Despite the promise of SSL in WHAR, most studies have not considered missing-device scenarios in multi-device WHAR. To bridge this gap, we propose a multi-device SSL WHAR method termed Spatial-Temporal Masked Autoencoder (STMAE). STMAE captures discriminative activity representations by using an asymmetrical encoder-decoder structure and a two-stage spatial-temporal masking strategy, which exploits the spatial-temporal correlations in multi-device data to improve the performance of SSL WHAR, especially in missing-device scenarios. Experiments on four real-world datasets demonstrate the efficacy of STMAE in various practical scenarios.
"Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition" — Shenghuan Miao, Ling Chen, Rong Hu. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-25, published 2024-01-12. DOI: 10.1145/3631415.
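The two-stage spatial-temporal masking strategy can be sketched as follows: first whole devices are masked out (mimicking missing-device scenarios), then time steps are masked on what remains, and the autoencoder is trained to reconstruct the hidden values. This is an illustrative sketch of the masking step only — the ratios and function signature are assumptions, not STMAE's actual hyperparameters:

```python
import numpy as np

def spatial_temporal_mask(x, device_ratio=0.25, time_ratio=0.5, seed=None):
    """Two-stage masking for multi-device sensor data of shape (devices, time, channels).

    Stage 1 (spatial): drop a fraction of devices entirely.
    Stage 2 (temporal): mask a fraction of time steps across all devices.
    Returns the masked array and the boolean keep-mask.
    """
    rng = np.random.default_rng(seed)
    n_dev, n_time, _ = x.shape
    keep = np.ones((n_dev, n_time), dtype=bool)
    # stage 1: spatial masking — whole devices go missing
    dropped = rng.choice(n_dev, size=max(1, int(n_dev * device_ratio)), replace=False)
    keep[dropped, :] = False
    # stage 2: temporal masking — hide time steps on the remaining devices
    hidden_t = rng.random(n_time) < time_ratio
    keep[:, hidden_t] = False
    masked = np.where(keep[..., None], x, 0.0)
    return masked, keep
```

Training the encoder-decoder to reconstruct the zeroed entries forces it to learn cross-device and cross-time correlations, which is what makes the representation robust when a device is genuinely absent at inference time.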
Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model while keeping their local training data private, sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, in particular membership inference attacks, which allow adversaries to determine whether a given data sample belongs to a participant's training data, raising a significant threat in sensitive ubiquitous computing systems. Membership inference attacks rely on a binary classifier able to differentiate member data samples used to train a model from non-member samples not used for training. Several defense mechanisms, including differential privacy, have been proposed to counter such attacks, but their main drawback is that they may reduce model accuracy while incurring non-negligible computational costs. In this paper, we address this problem with PASTEL, an FL privacy-preserving mechanism based on a novel multi-objective learning function. On the one hand, PASTEL decreases the generalization gap to reduce the difference between member and non-member data; on the other hand, it reduces model loss and leverages adaptive gradient descent optimization to preserve high model accuracy. Our experimental evaluations on eight widely used datasets and five model architectures show that PASTEL reduces membership inference attack success rates by up to 28%, reaching optimal privacy protection in most cases, with low to no perceptible impact on model accuracy.
"PASTEL" — F. Elhattab, Sara Bouchenak, Cédric Boscher. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-29, published 2024-01-12. DOI: 10.1145/3633808.
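The multi-objective idea — jointly minimizing the task loss and the train/validation generalization gap that membership inference exploits — can be sketched as a combined objective. This is a simplified illustration under assumed names; PASTEL's actual learning function and its adaptive optimizer are more involved:

```python
def privacy_aware_objective(train_loss, val_loss, lam=0.5):
    """Combined objective: task loss plus a penalty on the generalization gap.

    Membership inference succeeds when a model behaves differently on members
    (training data) vs. non-members, so shrinking |val_loss - train_loss|
    directly shrinks the attacker's signal. lam trades privacy for accuracy.
    """
    gap = abs(val_loss - train_loss)
    return train_loss + lam * gap
```

With lam = 0 this degenerates to ordinary training; larger lam values penalize models that memorize their training set, at the cost of optimizing the task loss less aggressively.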