Recent machine learning research on smart cities has achieved great success in predicting future trends, under the key assumption that the test data follows the same distribution as the training data. Rapid urbanization, however, makes this assumption difficult to uphold in practice, because new data keeps emerging from new environments (e.g., an emerging city or region) that may follow different distributions from the data in existing environments. Unlike transfer-learning settings, where target data is accessible during training, we often have no prior knowledge about the new environment. It is therefore critical to explore a predictive model that can be effectively adapted to unseen new environments. This work aims to address this Out-of-Distribution (OOD) challenge for sustainable cities. We propose to identify two kinds of features that are useful for OOD prediction in each environment: (1) environment-invariant features, which capture the commonalities shared across different environments; and (2) environment-aware features, which characterize the unique information of each environment. Take bike riding as an example: bike demand in different cities often follows the same pattern of increasing significantly during rush hour on workdays, while each city also exhibits local patterns arising from its culture and its citizens' travel preferences. We introduce a principled framework, sUrban, consisting of an environment-invariant optimization module for learning invariant representations and an environment-aware optimization module for learning environment-aware representations. Evaluation on real-world datasets from various urban application domains corroborates the generalizability of sUrban. This work opens up new avenues for smart city development.
sUrban. Qianru Wang, Bin Guo, Lu Cheng, Zhiwen Yu. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610877
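The invariant/environment-aware split that sUrban learns end-to-end can be illustrated with a toy heuristic. This sketch is not the paper's optimization: it merely flags a feature as environment-invariant when its mean is stable across environments and environment-aware otherwise; the `tol` threshold and the city data are made up for the example.

```python
# Toy illustration (NOT sUrban's actual method): classify each feature as
# environment-invariant or environment-aware by the spread of its
# per-environment means.
from statistics import mean, pstdev

def split_features(env_data, tol=0.5):
    """env_data: {env_name: list of equal-length feature vectors}."""
    n_features = len(next(iter(env_data.values()))[0])
    invariant, aware = [], []
    for j in range(n_features):
        # Mean of feature j within each environment.
        per_env_means = [mean(vec[j] for vec in vecs) for vecs in env_data.values()]
        # A small spread across environments suggests invariant behavior.
        if pstdev(per_env_means) <= tol:
            invariant.append(j)
        else:
            aware.append(j)
    return invariant, aware

# Feature 0 (e.g., a rush-hour demand spike) is shared across cities;
# feature 1 is city-specific (hypothetical numbers).
envs = {
    "city_a": [(1.0, 10.0), (1.2, 11.0)],
    "city_b": [(1.1, 3.0), (0.9, 2.5)],
}
print(split_features(envs))  # ([0], [1])
```

In the paper both kinds of representation are learned jointly by neural modules; this heuristic only conveys what "invariant across environments" means operationally.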
Lei Wang, Xingwei Wang, Dalin Zhang, Xiaolei Ma, Yong Zhang, Haipeng Dai, Chenren Xu, Zhijun Li, Tao Gu
Electrocardiogram (ECG) monitoring has been widely explored for detecting and diagnosing cardiovascular diseases due to its accuracy, simplicity, and sensitivity. However, medical- or commercial-grade ECG monitoring devices can be costly for people who want to monitor their ECG on a daily basis. These devices typically require several electrodes to be attached to the human body, which is inconvenient for continuous monitoring. To enable low-cost, everyday measurement of ECG signals with off-the-shelf devices, in this paper we propose a novel ECG sensing system that uses acceleration data collected from a smartphone. Our system offers several advantages over previous systems, including low cost, ease of use, location and user independence, and high accuracy. We design a two-tiered denoising process, comprising the stationary wavelet transform (SWT) and soft-thresholding, to effectively eliminate interference caused by respiration, body movement, and hand movement. Finally, we develop a multi-level deep learning recovery model to achieve efficient, real-time, and user-independent ECG measurement on commercial mobile phones. We conduct extensive experiments with 30 participants (nearly 36,000 heartbeat samples) under a user-independent scenario. The average errors of the PR interval, QRS interval, QT interval, and RR interval are 12.02 ms, 16.9 ms, 16.64 ms, and 1.84 ms, respectively. As a case study, we also demonstrate the strong capability of our system in signal recovery for patients with common heart diseases, including tachycardia, bradycardia, arrhythmia, unstable angina, and myocardial infarction.
Knowing Your Heart Condition Anytime. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610871
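The second tier of the denoiser, soft-thresholding of wavelet detail coefficients, can be sketched as follows. The SWT itself (available, e.g., as `pywt.swt` in PyWavelets) is omitted; only the shrinkage operator is shown, and the universal threshold below is a common textbook choice rather than necessarily the paper's.

```python
# Minimal sketch of wavelet-domain soft-thresholding (the second denoising
# tier). Assumes detail coefficients have already been produced by an SWT.
import math
from statistics import median

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t; |c| <= t becomes 0."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def universal_threshold(detail_coeffs):
    """Donoho's universal threshold sigma * sqrt(2 log N), with sigma
    estimated from the median absolute deviation of the detail band."""
    n = len(detail_coeffs)
    sigma = median(abs(c) for c in detail_coeffs) / 0.6745
    return sigma * math.sqrt(2 * math.log(n))

# Small coefficients (noise) are zeroed; large ones (signal) are kept, shrunk.
noisy = [0.05, -0.02, 3.0, 0.04, -2.5, 0.01]
t = universal_threshold(noisy)
print(soft_threshold(noisy, t))
```

After thresholding each detail band, an inverse SWT would reconstruct the denoised signal.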
Long-term exposure to stress harms people's mental and even physical health, and stress monitoring is of increasing significance in the prevention, diagnosis, and management of mental illness and chronic disease. However, current stress monitoring methods are either burdensome or intrusive, which hinders their widespread use in practice. In this paper, we propose mmStress, a contactless and non-intrusive solution that adopts a millimeter-wave radar to sense a subject's activities of daily living, from which it distills human stress. mmStress is built upon the psychologically validated relationship between human stress and "displacement activities": subjects under stress unconsciously perform fidgeting behaviors such as scratching, wandering around, and foot tapping. Despite the conceptual simplicity, the key challenge in realizing mmStress lies in identifying and quantifying the latent displacement activities autonomously, as they are usually transitory, submerged in normal daily activities, and highly variable across subjects. To address these challenges, we custom-design a neural network that learns human activities on both macro and micro timescales and exploits the continuity of human activities to accurately extract features of abnormal displacement activities. Moreover, we address the imbalanced stress distribution by incorporating a post-hoc logit adjustment procedure during model training. We prototype, deploy, and evaluate mmStress in ten volunteers' apartments for over four weeks; the results show that mmStress achieves a promising accuracy of ~80% in classifying low, medium, and high stress. Notably, mmStress retains its advantages under free-movement scenarios, advancing the state of the art, which has focused on stress monitoring in quasi-static scenarios.
mmStress. Kun Liang, Anfu Zhou, Zhan Zhang, Hao Zhou, Huadong Ma, Chenshu Wu. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610926
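Post-hoc logit adjustment, the standard remedy for imbalanced class distributions that the abstract mentions, offsets each class logit by the log of its class prior so rare classes (e.g., "high stress") are not systematically under-predicted. The sketch below uses the common recipe with `tau=1.0`; the class priors and the paper's exact setting are assumptions.

```python
# Sketch of post-hoc logit adjustment for an imbalanced 3-class problem.
# Subtracting tau * log(prior) penalizes head classes and boosts tail classes.
import math

def logit_adjust(logits, class_priors, tau=1.0):
    """Return logits offset by -tau * log(prior), to be argmax'd."""
    return [z - tau * math.log(p) for z, p in zip(logits, class_priors)]

# Class order: [low, medium, high]; "high" is rare in the training data.
priors = [0.6, 0.3, 0.1]
logits = [2.0, 1.9, 1.8]            # raw scores slightly favor the head class
adjusted = logit_adjust(logits, priors)
print(max(range(3), key=lambda i: adjusted[i]))  # 2, i.e. 'high'
```

With the adjustment, the nearly-tied raw scores resolve in favor of the rare class, which is the intended bias correction.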
Jingwen Zhang, Ruixuan Dai, Ashraf Rjob, Ruiqi Wang, Reshad Hamauon, Jeffrey Candell, Thomas Bailey, Victoria J. Fraser, Maria Cristina Vazquez Guillamet, Chenyang Lu
Contact tracing is a powerful tool for mitigating the spread of COVID-19 during a pandemic, and front-line healthcare workers are at particularly high risk of infection in hospital units. This paper presents ContAct TraCing for Hospitals (CATCH), an automated contact tracing system designed specifically for healthcare workers in hospital environments. CATCH employs distributed embedded devices placed throughout a hospital unit to detect close contacts among healthcare workers wearing Bluetooth Low Energy (BLE) beacons. We first identify a set of distinct contact tracing scenarios based on the diverse environmental characteristics of a real-world intensive care unit (ICU) and the different working patterns of healthcare workers in different spaces within the unit. We then develop a suite of novel contact tracing methods tailored to each scenario. CATCH has been deployed and evaluated in the ICU of a major medical center, demonstrating superior contact tracing accuracy over existing approaches across a wide range of experiments. Furthermore, the real-world case study highlights the effectiveness and efficiency of CATCH compared to standard contact tracing practices.
Contact Tracing for Healthcare Workers in an Intensive Care Unit. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610924
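A basic BLE close-contact check, of the kind CATCH's scenario-specific methods refine, can be sketched with a log-distance path-loss model. The `tx_power` (RSSI at 1 m), path-loss exponent `n`, and 2 m threshold below are hypothetical illustration values, not parameters reported by the paper.

```python
# Illustrative close-contact detection from BLE beacon RSSI samples.
# Model: RSSI = tx_power - 10 * n * log10(d); parameters are assumed.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Invert the log-distance path-loss model for distance d in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def is_close_contact(rssi_samples, threshold_m=2.0):
    """Flag a close contact when the median estimated distance is in range.
    The median damps single-sample fades and multipath spikes."""
    dists = sorted(rssi_to_distance(r) for r in rssi_samples)
    return dists[len(dists) // 2] <= threshold_m

print(is_close_contact([-55, -60, -58]))   # strong signal -> True
print(is_close_contact([-85, -90, -88]))   # weak signal -> False
```

Real deployments must also calibrate per-device transmit power and handle walls and body shadowing, which is precisely why CATCH tailors its methods per scenario.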
Personal informatics (PI) systems are designed for diverse users in the real world. Even when these systems are usable, people encounter barriers while engaging with them in ways designers cannot anticipate, which impacts the system's effectiveness. Although PI literature extensively reports such barriers, the volume of this information can be overwhelming. Researchers and practitioners often find themselves repeatedly addressing the same challenges since sifting through this enormous volume of knowledge looking for relevant insights is often infeasible. We contribute to alleviating this issue by conducting a meta-synthesis of the PI literature and categorizing people's barriers and facilitators to engagement with PI systems into eight themes. Based on the synthesized knowledge, we discuss specific generalizable barriers and paths for further investigations. This synthesis can serve as an index to identify barriers pertinent to each application domain and possibly to identify barriers from one domain that might apply to a different domain. Finally, to ensure the sustainability of the syntheses, we propose a Design Statements (DS) block for research articles.
A Meta-Synthesis of the Barriers and Facilitators for Personal Informatics Systems. Kazi Sinthia Kabir, Jason Wiese. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610893
Steeven Villa, Jasmin Niess, Albrecht Schmidt, Robin Welsch
Human augmentation technologies (ATs) are a subset of ubiquitous on-body devices designed to improve cognitive, sensory, and motor capacities. Although there is a large corpus of knowledge concerning ATs, less is known about societal attitudes towards them and how those attitudes shift over time. To that end, we developed the Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale, which measures how users of ATs are perceived. To develop the scale, we first created a list of candidate scale items based on past work on how people respond to new technologies. The items were then reviewed by experts. Next, we performed exploratory factor analysis to reduce the scale to its final length of thirteen items. Subsequently, we confirmed the test-retest validity of our instrument, as well as its construct validity. The SHAPE scale enables researchers and practitioners to understand the elements contributing to attitudes toward users of augmentation technologies, and it assists designers of ATs in creating artifacts that will be more universally accepted.
Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610915
Daniel Medeiros, Romane Dubus, Julie Williamson, Graham Wilson, Katharina Pöhlmann, Mark McGill
Augmented Reality (AR) headsets could significantly improve the passenger experience, freeing users from the restrictions of physical smartphones, tablets and seatback displays. However, the confined space of public transport and the varying proximity to other passengers may restrict what interaction techniques are deemed socially acceptable for AR users - particularly considering current reliance on mid-air interactions in consumer headsets. We contribute and utilize a novel approach to social acceptability video surveys, employing mixed reality composited videos to present a real user performing interactions across different virtual transport environments. This approach allows for controlled evaluation of perceived social acceptability whilst freeing researchers to present interactions in any simulated context. Our resulting survey (N=131) explores the social comfort of body, device, and environment-based interactions across seven transit seating arrangements. We reflect on the advantages of discreet inputs over mid-air and the unique challenges of face-to-face seating for passenger AR.
Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610923
Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, Yanch Ong
Communicating with others while engaging in simple daily activities is both common and natural for people. However, due to the hands- and eyes-busy nature of existing digital messaging applications, it is challenging to message someone while performing such activities. We present GlassMessaging, a messaging application for Optical See-Through Head-Mounted Displays (OHMDs) that supports messaging with voice and manual input in hands- and eyes-busy scenarios. GlassMessaging was developed iteratively through a formative study identifying current messaging behaviors and the challenges of messaging while multitasking. We then evaluated the application against a mobile phone platform at varying texting complexities in eating and walking scenarios. Our results showed that, compared to phone-based messaging, GlassMessaging increased messaging opportunities during multitasking due to its hands-free, wearable nature and multimodal input capabilities. GlassMessaging also affords easier access to voice input than the phone, reducing response time by 33.1% and increasing texting speed by 40.3%, at a cost of 2.5% in texting accuracy, particularly as texting complexity increases. Lastly, we discuss trade-offs and insights to lay a foundation for future OHMD-based messaging applications.
GlassMessaging. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023-09-27. https://doi.org/10.1145/3610931
Gwangbin Kim, Dohyeon Yeo, Taewoo Jo, Daniela Rus, SeungJun Kim
Explanations in automated vehicles help passengers understand the vehicle's state and capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE Level 4 and 5 vehicles who are not engaged in the driving process, the enhanced sense of control provided by explanations reduces potential anxieties, enabling them to fully leverage the benefits of automation. To construct explanations that enhance trust and situational awareness without disturbing passengers, we suggest testing with people who ultimately employ such explanations, ideally under real-world driving conditions. In this study, we examined the impact of various visual explanation types (perception, attention, perception+attention) and timing mechanisms (constantly provided or only under risky scenarios) on passenger experience under naturalistic driving scenarios using actual vehicles with mixed-reality support. Our findings indicate that visualizing the vehicle's perception state improves the perceived usability, trust, safety, and situational awareness without adding cognitive burden, even without explaining the underlying causes. We also demonstrate that the traffic risk probability could be used to control the timing of an explanation delivery, particularly when passengers are overwhelmed with information. Our study's on-road evaluation method offers a safe and reliable testing environment and can be easily customized for other AI models and explanation modalities.
{"title":"What and When to Explain?","authors":"Gwangbin Kim, Dohyeon Yeo, Taewoo Jo, Daniela Rus, SeungJun Kim","doi":"10.1145/3610886","DOIUrl":"https://doi.org/10.1145/3610886","url":null,"abstract":"Explanations in automated vehicles help passengers understand the vehicle's state and capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE Level 4 and 5 vehicles who are not engaged in the driving process, the enhanced sense of control provided by explanations reduces potential anxieties, enabling them to fully leverage the benefits of automation. To construct explanations that enhance trust and situational awareness without disturbing passengers, we suggest testing with people who ultimately employ such explanations, ideally under real-world driving conditions. In this study, we examined the impact of various visual explanation types (perception, attention, perception+attention) and timing mechanisms (constantly provided or only under risky scenarios) on passenger experience under naturalistic driving scenarios using actual vehicles with mixed-reality support. Our findings indicate that visualizing the vehicle's perception state improves the perceived usability, trust, safety, and situational awareness without adding cognitive burden, even without explaining the underlying causes. We also demonstrate that the traffic risk probability could be used to control the timing of an explanation delivery, particularly when passengers are overwhelmed with information. 
Our study's on-road evaluation method offers a safe and reliable testing environment and can be easily customized for other AI models and explanation modalities.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
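The abstract above notes that traffic risk probability can control when an explanation is delivered. As a rough illustration of such a timing mechanism, one could gate delivery on a risk estimate crossing a threshold, with a cooldown so sustained high risk does not flood the passenger with back-to-back messages. This is a minimal sketch under assumed names and values (`ExplanationScheduler`, `risk_threshold`, `cooldown_s`), not the authors' implementation:

```python
class ExplanationScheduler:
    """Gate explanation delivery on a traffic-risk estimate (illustrative)."""

    def __init__(self, risk_threshold=0.6, cooldown_s=5.0):
        self.risk_threshold = risk_threshold  # deliver only above this risk
        self.cooldown_s = cooldown_s          # minimum gap between deliveries
        self._last_shown = None               # timestamp of last explanation

    def should_explain(self, risk_prob, now_s):
        """Return True if an explanation should be shown at time now_s."""
        if risk_prob < self.risk_threshold:
            return False
        if self._last_shown is not None and now_s - self._last_shown < self.cooldown_s:
            return False  # still in cooldown: suppress to avoid overload
        self._last_shown = now_s
        return True
```

The cooldown complements the threshold: it keeps a prolonged risky scenario from triggering repeated explanations, in the spirit of the finding that timing matters most when passengers are overwhelmed with information.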
Jiha Kim, Younho Nam, Jungeun Lee, Young-Joo Suh, Inseok Hwang
Although many works bring exercise monitoring to smartphones and smartwatches, the inertial sensors used in such systems require the device to be in motion to detect exercises. We introduce ProxiFit, a highly practical on-device exercise monitoring system capable of classifying and counting exercises even if the device stays still. Utilizing novel proximity sensing of the natural magnetism in exercise equipment, ProxiFit brings (1) a new category of exercises that do not involve device motion, such as lower-body machine exercises, and (2) a new off-body exercise monitoring mode in which a smartphone can be conveniently viewed in front of the user during workouts. ProxiFit addresses the common issues of faint magnetic sensing by choosing appropriate preprocessing, negating adversarial motion artifacts, and designing a lightweight yet noise-tolerant classifier. Application-specific challenges, such as the wide variety of equipment and the impracticality of obtaining large datasets, are overcome by devising a unique yet challenging training policy. We evaluate ProxiFit on up to 10 weight machines (5 lower- and 5 upper-body) and 4 free-weight exercises, in both wearable and signage modes, with 19 users, at 3 gyms, over 14 months, and verify robustness against user and weather variations, spatial and rotational deviations in device location, and interference from neighboring machines.
{"title":"ProxiFit","authors":"Jiha Kim, Younho Nam, Jungeun Lee, Young-Joo Suh, Inseok Hwang","doi":"10.1145/3610920","DOIUrl":"https://doi.org/10.1145/3610920","url":null,"abstract":"Although many works bring exercise monitoring to smartphones and smartwatches, the inertial sensors used in such systems require the device to be in motion to detect exercises. We introduce ProxiFit, a highly practical on-device exercise monitoring system capable of classifying and counting exercises even if the device stays still. Utilizing novel proximity sensing of the natural magnetism in exercise equipment, ProxiFit brings (1) a new category of exercises that do not involve device motion, such as lower-body machine exercises, and (2) a new off-body exercise monitoring mode in which a smartphone can be conveniently viewed in front of the user during workouts. ProxiFit addresses the common issues of faint magnetic sensing by choosing appropriate preprocessing, negating adversarial motion artifacts, and designing a lightweight yet noise-tolerant classifier. Application-specific challenges, such as the wide variety of equipment and the impracticality of obtaining large datasets, are overcome by devising a unique yet challenging training policy. 
We evaluate ProxiFit on up to 10 weight machines (5 lower- and 5 upper-body) and 4 free-weight exercises, in both wearable and signage modes, with 19 users, at 3 gyms, over 14 months, and verify robustness against user and weather variations, spatial and rotational deviations in device location, and interference from neighboring machines.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
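To give a feel for what magnetometer-based proximity sensing of exercise equipment might involve, the sketch below computes the magnetic field magnitude from raw (x, y, z) readings and counts repetitions as threshold crossings over a baseline field. All names and numbers here (`count_reps`, the 5-microtesla threshold) are illustrative assumptions; ProxiFit's actual preprocessing, motion-artifact negation, and noise-tolerant classifier are more sophisticated than this.

```python
import math

def magnitude(samples):
    """Field magnitude of raw (x, y, z) magnetometer readings, in microtesla."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def count_reps(mag, baseline=None, threshold=5.0):
    """Count repetitions as upward crossings of baseline + threshold.

    A rep is registered each time the baseline-subtracted magnitude rises
    above `threshold` after having been at or below it (simple hysteresis,
    so one sustained peak counts once).
    """
    if baseline is None:
        baseline = sum(mag) / len(mag)  # crude estimate of the static field
    reps, above = 0, False
    for m in mag:
        if not above and m - baseline > threshold:
            reps += 1
            above = True
        elif above and m - baseline <= threshold:
            above = False
    return reps
```

For example, a weight stack moving closer to the device on each rep would produce periodic bumps in field magnitude over a steady ambient baseline, and each bump would register as one repetition.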