Zhi Wang, Beihong Jin, Fusang Zhang, Siheng Li, Junqi Ma
Blood Pressure (BP) is a critical vital sign for assessing cardiovascular health. However, existing cuff-based and wearable-based BP measurement methods require direct contact between the user's skin and the device, resulting in poor user experience and limited engagement for regular daily monitoring of BP. In this paper, we propose a contactless approach using Ultra-WideBand (UWB) signals for regular daily BP monitoring. To remove components of the received signals that are not related to the pulse waves, we propose two methods that utilize peak detection and principal component analysis to identify aliased and deformed parts. Furthermore, to extract BP-related features and improve the accuracy of BP prediction, particularly for hypertensive users, we construct a deep learning model that extracts features of pulse waves at different scales and identifies the different effects of features on BP. We build the corresponding BP monitoring system, named RF-BP, and conduct extensive experiments on both a public dataset and a self-built dataset. The experimental results show that RF-BP can accurately predict users' BP and provide alerts for users with hypertension. On the self-built dataset, the mean absolute error (MAE) and standard deviation (SD) for systolic BP (SBP) are 6.5 mmHg and 6.1 mmHg, and the MAE and SD for diastolic BP (DBP) are 4.7 mmHg and 4.9 mmHg.
{"title":"UWB-enabled Sensing for Fast and Effortless Blood Pressure Monitoring","authors":"Zhi Wang, Beihong Jin, Fusang Zhang, Siheng Li, Junqi Ma","doi":"10.1145/3659617","DOIUrl":"https://doi.org/10.1145/3659617","url":null,"abstract":"Blood Pressure (BP) is a critical vital sign to assess cardiovascular health. However, existing cuff-based and wearable-based BP measurement methods require direct contact between the user's skin and the device, resulting in poor user experience and limited engagement for regular daily monitoring of BP. In this paper, we propose a contactless approach using Ultra-WideBand (UWB) signals for regular daily BP monitoring. To remove components of the received signals that are not related to the pulse waves, we propose two methods that utilize peak detection and principal component analysis to identify aliased and deformed parts. Furthermore, to extract BP-related features and improve the accuracy of BP prediction, particularly for hypertensive users, we construct a deep learning model that extracts features of pulse waves at different scales and identifies the different effects of features on BP. We build the corresponding BP monitoring system named RF-BP and conduct extensive experiments on both a public dataset and a self-built dataset. The experimental results show that RF-BP can accurately predict the BP of users and provide alerts for users with hypertension. Over the self-built dataset, the mean absolute error (MAE) and standard deviation (SD) for SBP are 6.5 mmHg and 6.1 mmHg, and the MAE and SD for DBP are 4.7 mmHg and 4.9 mmHg.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140983696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wenwei Li, Ruiyang Gao, Jie Xiong, Jiarun Zhou, Leye Wang, Xingjian Mao, E. Yi, Daqing Zhang
Passive tracking plays a fundamental role in numerous applications such as elderly care, security surveillance, and smart homes. To utilize ubiquitous WiFi signals for passive tracking, the Doppler speed extracted from WiFi CSI (Channel State Information) is the key piece of information. Despite the progress made, existing approaches still require a large number of samples to achieve accurate Doppler speed estimation. To enable WiFi sensing with minimal interference to WiFi communication, accurate Doppler speed estimation with fewer CSI samples is crucial. To achieve this, we build a passive WiFi tracking system which employs a novel CSI-difference paradigm instead of raw CSI for Doppler speed estimation. In this paper, we provide the first deep dive into the potential of CSI difference for fine-grained Doppler speed estimation. Theoretically, our new design allows us to estimate Doppler speed with just three samples. While conventional methods adopt only phase information for Doppler estimation, we fuse both phase and amplitude information to improve Doppler estimation accuracy. Extensive experiments show that our solution outperforms state-of-the-art approaches, achieving higher accuracy with fewer CSI samples. Based on the proposed WiFi CSI-difference paradigm, we build a prototype passive tracking system which can accurately track a person with a median error lower than 34 cm, achieving accuracy similar to state-of-the-art systems while reducing the required number of samples to only 5% of theirs.
{"title":"WiFi-CSI Difference Paradigm","authors":"Wenwei Li, Ruiyang Gao, Jie Xiong, Jiarun Zhou, Leye Wang, Xingjian Mao, E. Yi, Daqing Zhang","doi":"10.1145/3659608","DOIUrl":"https://doi.org/10.1145/3659608","url":null,"abstract":"Passive tracking plays a fundamental role in numerous applications such as elderly care, security surveillance, and smart home. To utilize ubiquitous WiFi signals for passive tracking, the Doppler speed extracted from WiFi CSI (Channel State Information) is the key information. Despite the progress made, existing approaches still require a large number of samples to achieve accurate Doppler speed estimation. To enable WiFi sensing with minimum amount of interference on WiFi communication, accurate Doppler speed estimation with fewer CSI samples is crucial. To achieve this, we build a passive WiFi tracking system which employs a novel CSI difference paradigm instead of CSI for Doppler speed estimation. In this paper, we provide the first deep dive into the potential of CSI difference for fine-grained Doppler speed estimation. Theoretically, our new design allows us to estimate Doppler speed with just three samples. While conventional methods only adopt phase information for Doppler estimation, we creatively fuse both phase and amplitude information to improve Doppler estimation accuracy. Extensive experiments show that our solution outperforms the state-of-the-art approaches, achieving higher accuracy with fewer CSI samples. Based on this proposed WiFi-CSI difference paradigm, we build a prototype passive tracking system which can accurately track a person with a median error lower than 34 cm, achieving similar accuracy compared to the state-of-the-art systems, while significantly reducing the required number of samples to only 5%.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140983124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bhawana Chhaglani, Camellia Zakaria, Richard Peltier, Jeremy Gummeson, Prashant J. Shenoy
The types of human activities that occupants engage in within indoor spaces contribute significantly to the spread of airborne diseases through the aerosol particles they emit. Today, ubiquitous computing technologies can inform users of common atmospheric pollutants affecting indoor air quality. However, users remain uninformed of the rate of aerosols generated directly by human respiratory activities, a fundamental parameter impacting the risk of airborne transmission. In this paper, we present AeroSense, a novel privacy-preserving approach that uses audio sensing to accurately predict the rate of aerosols generated by detecting the kinds of human respiratory activities and determining the loudness of these activities. Our system adopts privacy-first as a key design choice; thus, it only extracts audio features that cannot be reconstructed into human-audible signals, using two omnidirectional microphone arrays. We employ a combination of binary classifiers based on the Random Forest algorithm to detect simultaneous occurrences of activities, with an average recall of 85%. The system determines the loudness level of all detected activities by estimating the distance between the microphone and the activity source; this level estimation technique yields an average error of 7.74%. Additionally, we developed a lightweight mask detection classifier to detect mask-wearing, which yields a recall of 75%. These intermediate outputs are critical predictors needed for AeroSense to estimate the amount of aerosol generated by an active human source. Our aerosol prediction model is a Random Forest regression model, which yields an MSE of 2.34 and an r² of 0.73. We demonstrate the accuracy of AeroSense by validating our results in a cleanroom setup and using advanced microbiological technology. We present results on the efficacy of AeroSense in natural settings through controlled and in-the-wild experiments. The ability to estimate aerosol emissions from detected human activities is part of a more extensive indoor air system integration, which can capture the rate of aerosol dissipation and inform users of airborne transmission risks in real time.
{"title":"AeroSense: Sensing Aerosol Emissions from Indoor Human Activities","authors":"Bhawana Chhaglani, Camellia Zakaria, Richard Peltier, Jeremy Gummeson, Prashant J. Shenoy","doi":"10.1145/3659593","DOIUrl":"https://doi.org/10.1145/3659593","url":null,"abstract":"The types of human activities occupants are engaged in within indoor spaces significantly contribute to the spread of airborne diseases through emitting aerosol particles. Today, ubiquitous computing technologies can inform users of common atmosphere pollutants for indoor air quality. However, they remain uninformed of the rate of aerosol generated directly from human respiratory activities, a fundamental parameter impacting the risk of airborne transmission. In this paper, we present AeroSense, a novel privacy-preserving approach using audio sensing to accurately predict the rate of aerosol generated from detecting the kinds of human respiratory activities and determining the loudness of these activities. Our system adopts a privacy-first as a key design choice; thus, it only extracts audio features that cannot be reconstructed into human audible signals using two omnidirectional microphone arrays. We employ a combination of binary classifiers using the Random Forest algorithm to detect simultaneous occurrences of activities with an average recall of 85%. It determines the level of all detected activities by estimating the distance between the microphone and the activity source. This level estimation technique yields an average of 7.74% error. Additionally, we developed a lightweight mask detection classifier to detect mask-wearing, which yields a recall score of 75%. These intermediary outputs are critical predictors needed for AeroSense to estimate the amounts of aerosol generated from an active human source. Our model to predict aerosol is a Random Forest regression model, which yields 2.34 MSE and 0.73 r2 value. We demonstrate the accuracy of AeroSense by validating our results in a cleanroom setup and using advanced microbiological technology. We present results on the efficacy of AeroSense in natural settings through controlled and in-the-wild experiments. The ability to estimate aerosol emissions from detected human activities is part of a more extensive indoor air system integration, which can capture the rate of aerosol dissipation and inform users of airborne transmission risks in real time.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140984155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changshuo Hu, Thivya Kandappu, Yang Liu, Cecilia Mascolo, Dong Ma
Running is a popular and accessible form of aerobic exercise, significantly benefiting our health and wellness. By monitoring a range of running parameters with wearable devices, runners can gain a deep understanding of their running behavior, facilitating performance improvement in future runs. Among these parameters, breathing, which fuels our bodies with oxygen and expels carbon dioxide, is crucial to improving running efficiency. While previous studies have made substantial progress in measuring breathing rate, breathing monitoring beyond rate during running remains largely unexplored. In this work, we fill this gap by presenting BreathPro, the first breathing mode monitoring system for running. It leverages the in-ear microphone on earables to record breathing sounds and combines it with the out-ear microphone on the same device to mitigate external noise, thereby enhancing the clarity of in-ear breathing sounds. BreathPro incorporates a suite of well-designed signal processing and machine learning techniques to enable breathing mode detection with superior accuracy. We implemented BreathPro as a smartphone application and demonstrated its energy-efficient, real-time execution.
{"title":"BreathPro: Monitoring Breathing Mode during Running with Earables","authors":"Changshuo Hu, Thivya Kandappu, Yang Liu, Cecilia Mascolo, Dong Ma","doi":"10.1145/3659607","DOIUrl":"https://doi.org/10.1145/3659607","url":null,"abstract":"Running is a popular and accessible form of aerobic exercise, significantly benefiting our health and wellness. By monitoring a range of running parameters with wearable devices, runners can gain a deep understanding of their running behavior, facilitating performance improvement in future runs. Among these parameters, breathing, which fuels our bodies with oxygen and expels carbon dioxide, is crucial to improving the efficiency of running. While previous studies have made substantial progress in measuring breathing rate, exploration of additional breathing monitoring during running is still lacking. In this work, we fill this gap by presenting BreathPro, the first breathing mode monitoring system for running. It leverages the in-ear microphone on earables to record breathing sounds and combines the out-ear microphone on the same device to mitigate external noises, thereby enhancing the clarity of in-ear breathing sounds. BreathPro incorporates a suite of well-designed signal processing and machine learning techniques to enable breathing mode detection with superior accuracy. We implemented BreathPro as a smartphone application and demonstrated its energy-efficient and real-time execution.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140985937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personal health technologies (PHTs) often do not consider the accessibility needs of blind individuals, preventing access to their capabilities and data. However, despite the accessibility barriers, some blind individuals persistently use such systems and even express satisfaction with them. To obtain a deeper understanding of blind users' prolonged experiences with PHTs, we interviewed 11 individuals who continue to use such technologies, discussing and observing their past and current interactions with their systems. We report on the usability issues blind users encounter, how they adapt to these situations, and theories explaining the persistent use of PHTs in the face of poor accessibility. We reflect on strategies to improve the accessibility and usability of PHTs for blind users, as well as ideas to aid the normalization of accessible features within these systems.
{"title":"Identify, Adapt, Persist","authors":"Jarrett G.W. Lee, Bongshin Lee, Soyoung Choi, JooYoung Seo, Eun Kyoung Choe","doi":"10.1145/3659585","DOIUrl":"https://doi.org/10.1145/3659585","url":null,"abstract":"Personal health technologies (PHTs) often do not consider the accessibility needs of blind individuals, preventing access to their capabilities and data. However, despite the accessibility barriers, some blind individuals persistently use such systems and even express satisfaction with them. To obtain a deeper understanding of blind users' prolonged experiences in PHTs, we interviewed 11 individuals who continue to use such technologies, discussing and observing their past and current interactions with their systems. We report on usability issues blind users encounter and how they adapt to these situations, and theories for the persistent use of PHTs in the face of poor accessibility. We reflect on strategies to improve the accessibility and usability of PHTs for blind users, as well as ideas to aid the normalization of accessible features within these systems.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140984213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Irmandy Wicaksono, Aditi Maheshwari, Don Derek Haddad, Joe Paradiso, Andreea Danielescu
The merging of electronic materials and textiles has triggered the proliferation of wearables and interactive surfaces in the ubiquitous computing era. However, this leads to e-textile waste that is difficult to recycle and decompose. Instead, we demonstrate an eco-design approach to upcycle waste cotton fabrics into functional textile elements through carbonization without the need for additional materials. We identify optimal parameters for the carbonization process and develop encapsulation techniques to improve the response, durability, and washability of the carbonized textiles. We then configure these e-textiles into various 'design primitives' including sensors, interconnects, and heating elements, and evaluate their electromechanical properties against commercially available e-textiles. Using these primitives, we demonstrate several applications, including a haptic-transfer fabric, a joint-sensing wearable, and an intelligent sailcloth. Finally, we highlight how the sensors can be composted, re-carbonized and coated onto other fabrics, or repurposed into different sensors towards their end-of-life to promote a circular manufacturing process.
{"title":"Design and Fabrication of Multifunctional E-Textiles by Upcycling Waste Cotton Fabrics through Carbonization","authors":"Irmandy Wicaksono, Aditi Maheshwari, Don Derek Haddad, Joe Paradiso, Andreea Danielescu","doi":"10.1145/3659588","DOIUrl":"https://doi.org/10.1145/3659588","url":null,"abstract":"The merging of electronic materials and textiles has triggered the proliferation of wearables and interactive surfaces in the ubiquitous computing era. However, this leads to e-textile waste that is difficult to recycle and decompose. Instead, we demonstrate an eco-design approach to upcycle waste cotton fabrics into functional textile elements through carbonization without the need for additional materials. We identify optimal parameters for the carbonization process and develop encapsulation techniques to improve the response, durability, and washability of the carbonized textiles. We then configure these e-textiles into various 'design primitives' including sensors, interconnects, and heating elements, and evaluate their electromechanical properties against commercially available e-textiles. Using these primitives, we demonstrate several applications, including a haptic-transfer fabric, a joint-sensing wearable, and an intelligent sailcloth. Finally, we highlight how the sensors can be composted, re-carbonized and coated onto other fabrics, or repurposed into different sensors towards their end-of-life to promote a circular manufacturing process.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140985199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emerging electrochromic (EC) materials have advanced the frontier of thin-film, low-power, and non-emissive display technologies. While suitable for wearable or textile-based applications, current EC display systems are manufactured in fixed, pre-designed patterns that hinder the potential of the reconfigurable display technologies desired for on-skin interactions. To realize customizable and scalable EC displays for skin wear, this paper introduces ECSkin, a construction toolkit composed of modular EC films. Our approach enables reconfigurable designs that display customized patterns by arranging combinations of premade EC modules. An ECSkin device can pixelate patterns and expand the display area by tessellating congruent modules. We present the fabrication of flexible EC display modules with accessible materials and tools. We performed technical evaluations to characterize the electrochromic performance and conducted user evaluations to verify the toolkit's usability and feasibility. Two example applications demonstrate the adaptability of the modular display across different body locations and user scenarios.
{"title":"ECSkin: Tessellating Electrochromic Films for Reconfigurable On-skin Displays","authors":"Pin-Sung Ku, Shuwen Jiang, Wei-Hsin Wang, H. Kao","doi":"10.1145/3659613","DOIUrl":"https://doi.org/10.1145/3659613","url":null,"abstract":"Emerging electrochromic (EC) materials have advanced the frontier of thin-film, low-power, and non-emissive display technologies. While suitable for wearable or textile-based applications, current EC display systems are manufactured in fixed, pre-designed patterns that hinder the potential of reconfigurable display technologies desired by on-skin interactions. To realize the customizable and scalable EC display for skin wear, this paper introduces ECSkin, a construction toolkit composed of modular EC films. Our approach enables reconfigurable designs that display customized patterns by arranging combinations of premade EC modules. An ECSkin device can pixelate patterns and expand the display area through tessellating congruent modules. We present the fabrication of flexible EC display modules with accessible materials and tools. We performed technical evaluations to characterize the electrochromic performance and conducted user evaluations to verify the toolkit's usability and feasibility. Two example applications demonstrate the adaptiveness of the modular display on different body locations and user scenarios.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140984340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present Snow, a cross-modal interface that integrates cold and tactile stimuli in mid-air to create snowflakes and raindrops for VR experiences. Snow uses six Peltier packs and an ultrasound haptic display to create unique cold-tactile sensations, letting users experience catching snowflakes and feeling raindrops on their bare hands. Our approach exploits humans' ability to identify tactile and cold stimuli projected onto the same location on the skin without one masking the other, creating illusions of snowflakes and raindrops. We design the visual and haptic renderings to be tightly coupled, presenting melting snow and rain droplets for realistic visuo-tactile experiences. For rendering multiple snowflakes and raindrops, we propose an aggregated haptic scheme to simulate heavy snowfall and rainfall environments with many visual particles. The results show that the aggregated haptic rendering scheme delivers a more realistic experience than the other schemes. We also confirm that providing cold-tactile cues enhances the user experience in both the snowfall and rainfall conditions compared to other modality conditions.
{"title":"Let It Snow: Designing Snowfall Experience in VR","authors":"Haokun Wang, Yatharth Singhal, Jin Ryong Kim","doi":"10.1145/3659587","DOIUrl":"https://doi.org/10.1145/3659587","url":null,"abstract":"We present Snow, a cross-modal interface that integrates cold and tactile stimuli in mid-air to create snowflakes and raindrops for VR experiences. Snow uses six Peltier packs and an ultrasound haptic display to create unique cold-tactile sensations for users to experience catching snowflakes and getting rained on their bare hands. Our approach considers humans' ability to identify tactile and cold stimuli without masking each other when projected onto the same location on their skin, making illusions of snowflakes and raindrops. We design both visual and haptic renderings to be tightly coupled to present snow melting and rain droplets for realistic visuo-tactile experiences. For multiple snowflakes and raindrops rendering, we propose an aggregated haptic scheme to simulate heavy snowfall and rainfall environments with many visual particles. The results show that the aggregated haptic rendering scheme demonstrates a more realistic experience than other schemes. We also confirm that our approach of providing cold-tactile cues enhances the user experiences in both conditions compared to other modality conditions.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140982987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zachary Englhardt, Chengqian Ma, Margaret E. Morris, Chun-Cheng Chang, Xuhai "Orson" Xu, Lianhui Qin, Daniel McDuff, Xin Liu, Shwetak Patel, Vikram Iyer
Passively collected behavioral health data from ubiquitous sensors could provide mental health professionals with valuable insights into patients' daily lives, but such efforts are impeded by disparate metrics, lack of interoperability, and unclear correlations between the measured signals and an individual's mental health. To address these challenges, we pioneer the exploration of large language models (LLMs) to synthesize clinically relevant insights from multi-sensor data. We develop chain-of-thought prompting methods to generate LLM reasoning on how data pertaining to activity, sleep, and social interaction relate to conditions such as depression and anxiety. We then prompt the LLM to perform binary classification, achieving an accuracy of 61.1%, exceeding the state of the art. We find that models like GPT-4 correctly reference numerical data 75% of the time. While we began our investigation by developing methods to use LLMs to output binary classifications for conditions like depression, we find instead that their greatest potential value to clinicians lies not in diagnostic classification, but rather in rigorous analysis of diverse self-tracking data to generate natural language summaries that synthesize multiple data streams and identify potential concerns. Clinicians envisioned using these insights in a variety of ways, principally for fostering collaborative investigation with patients to strengthen the therapeutic alliance and guide treatment. We describe this collaborative engagement, additional envisioned uses, and associated concerns that must be addressed before adoption in real-world contexts.
{"title":"From Classification to Clinical Insights","authors":"Zachary Englhardt, Chengqian Ma, Margaret E. Morris, Chun-Cheng Chang, Xuhai \"Orson\" Xu, Lianhui Qin, Daniel McDuff, Xin Liu, Shwetak Patel, Vikram Iyer","doi":"10.1145/3659604","DOIUrl":"https://doi.org/10.1145/3659604","url":null,"abstract":"Passively collected behavioral health data from ubiquitous sensors could provide mental health professionals valuable insights into patient's daily lives, but such efforts are impeded by disparate metrics, lack of interoperability, and unclear correlations between the measured signals and an individual's mental health. To address these challenges, we pioneer the exploration of large language models (LLMs) to synthesize clinically relevant insights from multi-sensor data. We develop chain-of-thought prompting methods to generate LLM reasoning on how data pertaining to activity, sleep and social interaction relate to conditions such as depression and anxiety. We then prompt the LLM to perform binary classification, achieving accuracies of 61.1%, exceeding the state of the art. We find models like GPT-4 correctly reference numerical data 75% of the time.\u0000 While we began our investigation by developing methods to use LLMs to output binary classifications for conditions like depression, we find instead that their greatest potential value to clinicians lies not in diagnostic classification, but rather in rigorous analysis of diverse self-tracking data to generate natural language summaries that synthesize multiple data streams and identify potential concerns. Clinicians envisioned using these insights in a variety of ways, principally for fostering collaborative investigation with patients to strengthen the therapeutic alliance and guide treatment. We describe this collaborative engagement, additional envisioned uses, and associated concerns that must be addressed before adoption in real-world contexts.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140985152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chitralekha Gupta, Shreyas Sridhar, Denys J. C. Matthies, Christophe Jouffrais, Suranga Nanayakkara
Spatial awareness, particularly awareness of distant environmental scenes known as vista-space, is crucial to the cognitive and aesthetic needs of People with Visual Impairments (PVIs). In this work, through a formative study with PVIs, we establish the need for vista-space awareness among people with visual impairments and identify the scenarios in which this awareness would be helpful. We investigate the potential of existing sonification techniques as well as AI-based audio generative models to design sounds that can create awareness of vista-space scenes. Our first user study, consisting of a listening test with sighted participants as well as PVIs, suggests that current AI generative models for audio can produce sounds comparable to existing sonification techniques in communicating sonic objects and scenes, in terms of intuitiveness and learnability. Furthermore, through a Wizard-of-Oz study with PVIs, we demonstrate the utility of AI-generated sounds as well as scene audio recordings as auditory icons for providing vista-scene awareness in the contexts of navigation and leisure. This is a first step towards addressing the need for vista-space awareness and experience among PVIs.
{"title":"SonicVista: Towards Creating Awareness of Distant Scenes through Sonification","authors":"Chitralekha Gupta, Shreyas Sridhar, Denys J. C. Matthies, Christophe Jouffrais, Suranga Nanayakkara","doi":"10.1145/3659609","DOIUrl":"https://doi.org/10.1145/3659609","url":null,"abstract":"Spatial awareness, particularly awareness of distant environmental scenes known as vista-space, is crucial and contributes to the cognitive and aesthetic needs of People with Visual Impairments (PVI). In this work, through a formative study with PVIs, we establish the need for vista-space awareness amongst people with visual impairments, and the possible scenarios where this awareness would be helpful. We investigate the potential of existing sonification techniques as well as AI-based audio generative models to design sounds that can create awareness of vista-space scenes. Our first user study, consisting of a listening test with sighted participants as well as PVIs, suggests that current AI generative models for audio have the potential to produce sounds that are comparable to existing sonification techniques in communicating sonic objects and scenes in terms of their intuitiveness, and learnability. Furthermore, through a wizard-of-oz study with PVIs, we demonstrate the utility of AI-generated sounds as well as scene audio recordings as auditory icons to provide vista-scene awareness, in the contexts of navigation and leisure. This is the first step towards addressing the need for vista-space awareness and experience in PVIs.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140982859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}