
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

UWB-enabled Sensing for Fast and Effortless Blood Pressure Monitoring
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659617
Zhi Wang, Beihong Jin, Fusang Zhang, Siheng Li, Junqi Ma
Blood Pressure (BP) is a critical vital sign for assessing cardiovascular health. However, existing cuff-based and wearable-based BP measurement methods require direct contact between the user's skin and the device, resulting in poor user experience and limited engagement for regular daily monitoring of BP. In this paper, we propose a contactless approach using Ultra-WideBand (UWB) signals for regular daily BP monitoring. To remove components of the received signals that are not related to the pulse waves, we propose two methods that utilize peak detection and principal component analysis to identify aliased and deformed parts. Furthermore, to extract BP-related features and improve the accuracy of BP prediction, particularly for hypertensive users, we construct a deep learning model that extracts features of pulse waves at different scales and identifies the different effects of features on BP. We build the corresponding BP monitoring system named RF-BP and conduct extensive experiments on both a public dataset and a self-built dataset. The experimental results show that RF-BP can accurately predict the BP of users and provide alerts for users with hypertension. On the self-built dataset, the mean absolute error (MAE) and standard deviation (SD) for SBP are 6.5 mmHg and 6.1 mmHg, and the MAE and SD for DBP are 4.7 mmHg and 4.9 mmHg.
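As a rough illustration of the principal-component-analysis step the abstract mentions, the sketch below projects slow-time UWB frames onto their top principal components and discards the rest; the data layout and function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pca_denoise(frames: np.ndarray, n_components: int = 1) -> np.ndarray:
    """Keep only the top principal components of slow-time UWB
    frames, discarding low-variance components unlikely to carry
    the pulse wave. Illustrative sketch only.

    frames: (n_frames, n_range_bins) real-valued matrix.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; rows of vt are principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # (k, n_range_bins)
    scores = centered @ basis.T      # (n_frames, k)
    return scores @ basis + mean     # rank-k reconstruction
```

A rank-1 input (a single oscillation pattern across range bins) is reconstructed exactly, while uncorrelated clutter spread across many components is attenuated.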
Citations: 0
WiFi-CSI Difference Paradigm
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659608
Wenwei Li, Ruiyang Gao, Jie Xiong, Jiarun Zhou, Leye Wang, Xingjian Mao, E. Yi, Daqing Zhang
Passive tracking plays a fundamental role in numerous applications such as elderly care, security surveillance, and smart homes. To utilize ubiquitous WiFi signals for passive tracking, the Doppler speed extracted from WiFi CSI (Channel State Information) is the key information. Despite the progress made, existing approaches still require a large number of samples to achieve accurate Doppler speed estimation. To enable WiFi sensing with a minimal amount of interference on WiFi communication, accurate Doppler speed estimation with fewer CSI samples is crucial. To achieve this, we build a passive WiFi tracking system which employs a novel CSI difference paradigm instead of CSI for Doppler speed estimation. In this paper, we provide the first deep dive into the potential of CSI difference for fine-grained Doppler speed estimation. Theoretically, our new design allows us to estimate Doppler speed with just three samples. While conventional methods only adopt phase information for Doppler estimation, we creatively fuse both phase and amplitude information to improve Doppler estimation accuracy. Extensive experiments show that our solution outperforms the state-of-the-art approaches, achieving higher accuracy with fewer CSI samples. Based on this proposed WiFi-CSI difference paradigm, we build a prototype passive tracking system which can accurately track a person with a median error lower than 34 cm, achieving similar accuracy compared to the state-of-the-art systems, while significantly reducing the required number of samples to only 5%.
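The three-sample claim can be made concrete with a minimal phase-increment sketch: the phase of one complex CSI sample times the conjugate of the previous one advances by 2πf_dΔt, so three samples give two increments to average. The carrier frequency, sample layout, and function name below are illustrative assumptions, not the paper's CSI-difference algorithm.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
FREQ = 5.32e9  # assumed WiFi carrier frequency (Hz)

def doppler_speed(h: np.ndarray, dt: float) -> float:
    """Estimate Doppler speed from a handful of complex CSI
    samples taken dt seconds apart. Illustrative sketch.
    """
    # Phase increment between consecutive samples: 2*pi*f_d*dt.
    increments = np.angle(h[1:] * np.conj(h[:-1]))
    f_d = increments.mean() / (2 * np.pi * dt)
    wavelength = C / FREQ
    return f_d * wavelength  # radial speed in m/s
```

This ideal model ignores static-path and noise components, which is precisely what motivates the more robust processing the abstract describes.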
Citations: 0
AeroSense: Sensing Aerosol Emissions from Indoor Human Activities
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659593
Bhawana Chhaglani, Camellia Zakaria, Richard Peltier, Jeremy Gummeson, Prashant J. Shenoy
The types of human activities occupants are engaged in within indoor spaces significantly contribute to the spread of airborne diseases through emitting aerosol particles. Today, ubiquitous computing technologies can inform users of common atmospheric pollutants for indoor air quality. However, they remain uninformed of the rate of aerosol generated directly from human respiratory activities, a fundamental parameter impacting the risk of airborne transmission. In this paper, we present AeroSense, a novel privacy-preserving approach using audio sensing to accurately predict the rate of aerosol generated by detecting the kinds of human respiratory activities and determining the loudness of these activities. Our system adopts privacy-first as a key design choice; thus, it only extracts audio features that cannot be reconstructed into human audible signals, using two omnidirectional microphone arrays. We employ a combination of binary classifiers using the Random Forest algorithm to detect simultaneous occurrences of activities with an average recall of 85%. It determines the level of all detected activities by estimating the distance between the microphone and the activity source. This level estimation technique yields an average error of 7.74%. Additionally, we developed a lightweight mask detection classifier to detect mask-wearing, which yields a recall score of 75%. These intermediary outputs are critical predictors needed for AeroSense to estimate the amounts of aerosol generated from an active human source. Our model to predict aerosol is a Random Forest regression model, which yields an MSE of 2.34 and an r² of 0.73. We demonstrate the accuracy of AeroSense by validating our results in a cleanroom setup and using advanced microbiological technology. We present results on the efficacy of AeroSense in natural settings through controlled and in-the-wild experiments.
The ability to estimate aerosol emissions from detected human activities is part of a more extensive indoor air system integration, which can capture the rate of aerosol dissipation and inform users of airborne transmission risks in real time.
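In the simplest free-field model, the distance-based level estimation the abstract describes amounts to compensating for spherical spreading (about 6 dB of loss per doubling of distance). The sketch below is an assumed textbook approximation, not the authors' method.

```python
import math

def source_level_db(measured_db: float, distance_m: float,
                    ref_distance_m: float = 1.0) -> float:
    """Back out the sound level at the activity source from the
    level measured at the microphone, assuming free-field
    spherical spreading. Simplified stand-in for the paper's
    level-estimation step.
    """
    return measured_db + 20 * math.log10(distance_m / ref_distance_m)
```

For example, 60 dB measured at 2 m corresponds to roughly 66 dB at the 1 m reference distance; indoor reverberation would make the real relationship less clean.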
Citations: 0
BreathPro: Monitoring Breathing Mode during Running with Earables
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659607
Changshuo Hu, Thivya Kandappu, Yang Liu, Cecilia Mascolo, Dong Ma
Running is a popular and accessible form of aerobic exercise, significantly benefiting our health and wellness. By monitoring a range of running parameters with wearable devices, runners can gain a deep understanding of their running behavior, facilitating performance improvement in future runs. Among these parameters, breathing, which fuels our bodies with oxygen and expels carbon dioxide, is crucial to improving the efficiency of running. While previous studies have made substantial progress in measuring breathing rate, exploration of additional breathing monitoring during running is still lacking. In this work, we fill this gap by presenting BreathPro, the first breathing mode monitoring system for running. It leverages the in-ear microphone on earables to record breathing sounds and combines the out-ear microphone on the same device to mitigate external noises, thereby enhancing the clarity of in-ear breathing sounds. BreathPro incorporates a suite of well-designed signal processing and machine learning techniques to enable breathing mode detection with superior accuracy. We implemented BreathPro as a smartphone application and demonstrated its energy-efficient and real-time execution.
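A generic two-microphone noise-mitigation step of the kind the abstract describes can be sketched with spectral subtraction: use the out-ear spectrum as a noise estimate and subtract it from the in-ear spectrum. This approach and its names are assumptions, since the listing does not disclose BreathPro's actual pipeline.

```python
import numpy as np

def suppress_external_noise(in_ear: np.ndarray, out_ear: np.ndarray,
                            alpha: float = 1.0) -> np.ndarray:
    """Reduce external noise in the in-ear signal by subtracting
    the out-ear magnitude spectrum (simple spectral subtraction),
    keeping the in-ear phase. Illustrative sketch only.
    """
    spec = np.fft.rfft(in_ear)
    noise_mag = np.abs(np.fft.rfft(out_ear))
    # Clamp at zero so over-subtraction cannot create negative magnitudes.
    mag = np.maximum(np.abs(spec) - alpha * noise_mag, 0.0)
    cleaned = mag * np.exp(1j * np.angle(spec))
    return np.fft.irfft(cleaned, n=len(in_ear))
```

With a synthetic low-frequency "breath" tone plus a high-frequency noise tone captured by both microphones, the output retains the breath component and removes the shared noise.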
Citations: 0
Identify, Adapt, Persist
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659585
Jarrett G.W. Lee, Bongshin Lee, Soyoung Choi, JooYoung Seo, Eun Kyoung Choe
Personal health technologies (PHTs) often do not consider the accessibility needs of blind individuals, preventing access to their capabilities and data. However, despite the accessibility barriers, some blind individuals persistently use such systems and even express satisfaction with them. To obtain a deeper understanding of blind users' prolonged experiences in PHTs, we interviewed 11 individuals who continue to use such technologies, discussing and observing their past and current interactions with their systems. We report on usability issues blind users encounter and how they adapt to these situations, and theories for the persistent use of PHTs in the face of poor accessibility. We reflect on strategies to improve the accessibility and usability of PHTs for blind users, as well as ideas to aid the normalization of accessible features within these systems.
Citations: 0
Design and Fabrication of Multifunctional E-Textiles by Upcycling Waste Cotton Fabrics through Carbonization
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659588
Irmandy Wicaksono, Aditi Maheshwari, Don Derek Haddad, Joe Paradiso, Andreea Danielescu
The merging of electronic materials and textiles has triggered the proliferation of wearables and interactive surfaces in the ubiquitous computing era. However, this leads to e-textile waste that is difficult to recycle and decompose. Instead, we demonstrate an eco-design approach to upcycle waste cotton fabrics into functional textile elements through carbonization without the need for additional materials. We identify optimal parameters for the carbonization process and develop encapsulation techniques to improve the response, durability, and washability of the carbonized textiles. We then configure these e-textiles into various 'design primitives' including sensors, interconnects, and heating elements, and evaluate their electromechanical properties against commercially available e-textiles. Using these primitives, we demonstrate several applications, including a haptic-transfer fabric, a joint-sensing wearable, and an intelligent sailcloth. Finally, we highlight how the sensors can be composted, re-carbonized and coated onto other fabrics, or repurposed into different sensors towards their end-of-life to promote a circular manufacturing process.
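Carbonized-fabric sensors of the kind described are resistive, and a common way to read such a sensor is a voltage divider into an ADC. The sketch below is a hypothetical readout under assumed component values (10 kΩ reference resistor, 3.3 V supply, 12-bit ADC), not part of the paper.

```python
def divider_resistance(adc_counts: int, adc_max: int = 4095,
                       v_supply: float = 3.3,
                       r_fixed: float = 10_000.0) -> float:
    """Convert an ADC reading into the resistance of a resistive
    fabric sensor wired as the lower leg of a voltage divider.
    Component values are illustrative assumptions.
    """
    v_out = v_supply * adc_counts / adc_max
    # v_out = v_supply * r_sensor / (r_fixed + r_sensor), solved for r_sensor.
    return r_fixed * v_out / (v_supply - v_out)
```

A mid-scale reading corresponds to a sensor resistance near the reference resistor's value, and stretching or pressing the fabric would move the reading monotonically.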
Citations: 0
ECSkin: Tessellating Electrochromic Films for Reconfigurable On-skin Displays
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659613
Pin-Sung Ku, Shuwen Jiang, Wei-Hsin Wang, H. Kao
Emerging electrochromic (EC) materials have advanced the frontier of thin-film, low-power, and non-emissive display technologies. While suitable for wearable or textile-based applications, current EC display systems are manufactured in fixed, pre-designed patterns that hinder the potential of the reconfigurable display technologies desired for on-skin interaction. To realize customizable and scalable EC displays for skin wear, this paper introduces ECSkin, a construction toolkit composed of modular EC films. Our approach enables reconfigurable designs that display customized patterns by arranging combinations of premade EC modules. An ECSkin device can pixelate patterns and expand the display area by tessellating congruent modules. We present the fabrication of flexible EC display modules with accessible materials and tools. We performed technical evaluations to characterize the electrochromic performance and conducted user evaluations to verify the toolkit's usability and feasibility. Two example applications demonstrate the adaptiveness of the modular display on different body locations and user scenarios.
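The pixelation-by-tessellation idea can be sketched as splitting a target pattern into per-module blocks, one block per congruent EC module. The representation below is an illustrative assumption, not ECSkin's actual data model.

```python
def tile_pattern(pattern, module_rows, module_cols):
    """Split a binary pixel pattern into per-module sub-patterns so
    each congruent EC module drives its own block of pixels.
    Illustrative sketch of the tessellation idea.

    pattern: list of rows of 0/1; its dimensions must be divisible
    by the module size.
    """
    h, w = len(pattern), len(pattern[0])
    assert h % module_rows == 0 and w % module_cols == 0
    tiles = {}
    for r in range(0, h, module_rows):
        for c in range(0, w, module_cols):
            tiles[(r // module_rows, c // module_cols)] = [
                row[c:c + module_cols] for row in pattern[r:r + module_rows]
            ]
    return tiles
```

Expanding the display area then just means adding modules, i.e. growing the grid of tile keys.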
Citations: 0
Let It Snow: Designing Snowfall Experience in VR
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659587
Haokun Wang, Yatharth Singhal, Jin Ryong Kim
We present Snow, a cross-modal interface that integrates cold and tactile stimuli in mid-air to create snowflakes and raindrops for VR experiences. Snow uses six Peltier packs and an ultrasound haptic display to create unique cold-tactile sensations, letting users experience catching snowflakes and feeling rain on their bare hands. Our approach exploits humans' ability to identify tactile and cold stimuli without masking each other when projected onto the same location on the skin, creating illusions of snowflakes and raindrops. We design both visual and haptic renderings to be tightly coupled to present snow melting and rain droplets for realistic visuo-tactile experiences. For rendering multiple snowflakes and raindrops, we propose an aggregated haptic scheme to simulate heavy snowfall and rainfall environments with many visual particles. The results show that the aggregated haptic rendering scheme delivers a more realistic experience than other schemes. We also confirm that our approach of providing cold-tactile cues enhances the user experience in both conditions compared to other modality conditions.
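One plausible reading of an "aggregated haptic scheme" is collapsing the many visual particles near the hand into a single clamped drive level per actuation frame, weighting each particle by proximity. The sketch below is a guess at that idea, not the paper's design.

```python
def aggregate_intensity(particle_distances, max_radius=0.05, cap=1.0):
    """Collapse many visual particles (snowflakes/raindrops) near
    the hand into one haptic drive level. Particles within
    max_radius (meters) contribute proportionally to proximity;
    the sum is clamped at cap. Illustrative assumption.
    """
    total = 0.0
    for d in particle_distances:
        if d < max_radius:
            total += 1.0 - d / max_radius  # closer -> stronger
    return min(total, cap)
```

Clamping keeps a dense flurry from exceeding the actuator's output range while still distinguishing light from heavy snowfall below the cap.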
Citations: 0
From Classification to Clinical Insights
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659604
Zachary Englhardt, Chengqian Ma, Margaret E. Morris, Chun-Cheng Chang, Xuhai "Orson" Xu, Lianhui Qin, Daniel McDuff, Xin Liu, Shwetak Patel, Vikram Iyer
Passively collected behavioral health data from ubiquitous sensors could provide mental health professionals valuable insights into patients' daily lives, but such efforts are impeded by disparate metrics, lack of interoperability, and unclear correlations between the measured signals and an individual's mental health. To address these challenges, we pioneer the exploration of large language models (LLMs) to synthesize clinically relevant insights from multi-sensor data. We develop chain-of-thought prompting methods that generate LLM reasoning on how data pertaining to activity, sleep, and social interaction relate to conditions such as depression and anxiety. We then prompt the LLM to perform binary classification, achieving an accuracy of 61.1%, exceeding the state of the art. We find that models like GPT-4 correctly reference numerical data 75% of the time. While we began our investigation by developing methods to use LLMs to output binary classifications for conditions like depression, we find instead that their greatest potential value to clinicians lies not in diagnostic classification, but rather in rigorous analysis of diverse self-tracking data to generate natural language summaries that synthesize multiple data streams and identify potential concerns. Clinicians envisioned using these insights in a variety of ways, principally for fostering collaborative investigation with patients to strengthen the therapeutic alliance and guide treatment. We describe this collaborative engagement, additional envisioned uses, and associated concerns that must be addressed before adoption in real-world contexts.
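The chain-of-thought step in the abstract above can be sketched as a prompt-assembly function: the model is asked to reason over behavioral features before emitting a binary label. The feature names, wording, and three-step structure below are illustrative assumptions; the paper's actual prompts are not reproduced here:

```python
def build_cot_prompt(sensor_summary: dict) -> str:
    """Assemble a chain-of-thought style prompt that asks an LLM to reason
    over passive-sensing features before giving a binary depression-risk
    answer. All feature names and wording are hypothetical examples."""
    lines = [f"- {name}: {value}" for name, value in sensor_summary.items()]
    return (
        "You are assisting a mental-health clinician.\n"
        "Weekly passive-sensing summary for one participant:\n"
        + "\n".join(lines) + "\n\n"
        "Step 1: Describe what the activity, sleep, and social-interaction "
        "features suggest about the participant's recent behavior.\n"
        "Step 2: Explain how those behaviors relate to common correlates of "
        "depression (e.g., reduced mobility, disrupted sleep, withdrawal).\n"
        "Step 3: Based only on the reasoning above, answer with exactly one "
        "word, YES or NO: does this summary indicate elevated depression risk?"
    )

# Hypothetical weekly feature summary for one participant.
prompt = build_cot_prompt({
    "daily_step_count_mean": 2100,
    "sleep_duration_hours_mean": 9.4,
    "outgoing_calls_per_day": 0.3,
})
```

Forcing the reasoning steps before the final one-word answer is what distinguishes this from plain zero-shot classification, and it also yields the natural-language summaries the clinicians in the study found most valuable.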
{"title":"From Classification to Clinical Insights","authors":"Zachary Englhardt, Chengqian Ma, Margaret E. Morris, Chun-Cheng Chang, Xuhai \"Orson\" Xu, Lianhui Qin, Daniel McDuff, Xin Liu, Shwetak Patel, Vikram Iyer","doi":"10.1145/3659604","DOIUrl":"https://doi.org/10.1145/3659604","url":null,"abstract":"Passively collected behavioral health data from ubiquitous sensors could provide mental health professionals valuable insights into patient's daily lives, but such efforts are impeded by disparate metrics, lack of interoperability, and unclear correlations between the measured signals and an individual's mental health. To address these challenges, we pioneer the exploration of large language models (LLMs) to synthesize clinically relevant insights from multi-sensor data. We develop chain-of-thought prompting methods to generate LLM reasoning on how data pertaining to activity, sleep and social interaction relate to conditions such as depression and anxiety. We then prompt the LLM to perform binary classification, achieving accuracies of 61.1%, exceeding the state of the art. We find models like GPT-4 correctly reference numerical data 75% of the time.\u0000 While we began our investigation by developing methods to use LLMs to output binary classifications for conditions like depression, we find instead that their greatest potential value to clinicians lies not in diagnostic classification, but rather in rigorous analysis of diverse self-tracking data to generate natural language summaries that synthesize multiple data streams and identify potential concerns. Clinicians envisioned using these insights in a variety of ways, principally for fostering collaborative investigation with patients to strengthen the therapeutic alliance and guide treatment. 
We describe this collaborative engagement, additional envisioned uses, and associated concerns that must be addressed before adoption in real-world contexts.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140985152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SonicVista: Towards Creating Awareness of Distant Scenes through Sonification
Q1 Computer Science Pub Date: 2024-05-13 DOI: 10.1145/3659609
Chitralekha Gupta, Shreyas Sridhar, Denys J. C. Matthies, Christophe Jouffrais, Suranga Nanayakkara
Spatial awareness, particularly awareness of distant environmental scenes known as vista-space, is crucial and contributes to the cognitive and aesthetic needs of People with Visual Impairments (PVIs). In this work, through a formative study with PVIs, we establish the need for vista-space awareness among people with visual impairments and the scenarios where this awareness would be helpful. We investigate the potential of existing sonification techniques as well as AI-based audio generative models to design sounds that can create awareness of vista-space scenes. Our first user study, consisting of a listening test with sighted participants as well as PVIs, suggests that current AI generative models for audio can produce sounds comparable to existing sonification techniques in communicating sonic objects and scenes in terms of intuitiveness and learnability. Furthermore, through a Wizard-of-Oz study with PVIs, we demonstrate the utility of AI-generated sounds as well as scene audio recordings as auditory icons that provide vista-scene awareness in the contexts of navigation and leisure. This is the first step towards addressing the need for vista-space awareness and experience among PVIs.
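As a toy illustration of sonification, the sketch below renders a parametric auditory icon whose loudness falls off with scene distance. The frequency choice, inverse-distance gain, and decay envelope are assumptions for illustration only, not the techniques evaluated in the paper:

```python
import numpy as np

def auditory_icon(freq_hz=880.0, distance_m=50.0, duration_s=1.0, sample_rate=44100):
    """Render a minimal auditory icon for a distant scene: a pure tone whose
    loudness falls off with distance and whose amplitude envelope decays,
    loosely evoking a far-away sound source. Parameters are illustrative."""
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    # Inverse-distance loudness scaling, referenced to 1 m.
    gain = 1.0 / max(distance_m, 1.0)
    envelope = np.exp(-3.0 * t / duration_s)  # simple exponential decay
    return gain * envelope * np.sin(2.0 * np.pi * freq_hz * t)

# One second of a 440 Hz icon for a scene roughly 20 m away.
samples = auditory_icon(freq_hz=440.0, distance_m=20.0)
```

In practice such a signal would be written to a WAV file or streamed to an audio device; richer scene icons (e.g., the AI-generated sounds studied in the paper) would replace the pure tone.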
{"title":"SonicVista: Towards Creating Awareness of Distant Scenes through Sonification","authors":"Chitralekha Gupta, Shreyas Sridhar, Denys J. C. Matthies, Christophe Jouffrais, Suranga Nanayakkara","doi":"10.1145/3659609","DOIUrl":"https://doi.org/10.1145/3659609","url":null,"abstract":"Spatial awareness, particularly awareness of distant environmental scenes known as vista-space, is crucial and contributes to the cognitive and aesthetic needs of People with Visual Impairments (PVI). In this work, through a formative study with PVIs, we establish the need for vista-space awareness amongst people with visual impairments, and the possible scenarios where this awareness would be helpful. We investigate the potential of existing sonification techniques as well as AI-based audio generative models to design sounds that can create awareness of vista-space scenes. Our first user study, consisting of a listening test with sighted participants as well as PVIs, suggests that current AI generative models for audio have the potential to produce sounds that are comparable to existing sonification techniques in communicating sonic objects and scenes in terms of their intuitiveness, and learnability. Furthermore, through a wizard-of-oz study with PVIs, we demonstrate the utility of AI-generated sounds as well as scene audio recordings as auditory icons to provide vista-scene awareness, in the contexts of navigation and leisure. 
This is the first step towards addressing the need for vista-space awareness and experience in PVIs.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140982859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0