There is a developing recognition of the social and economic costs entailed in global supply chains. In this paper, we report on efforts to provide alternative, more sustainable and resilient models of production. Community Supported Agricultures (CSAs) address this problem but require new means of exchange which, we suggest, offer a design opportunity for sustainable HCI research. This paper presents a two-month participatory observation in a food movement, a German CSA which developed a distribution system involving their own currency. Based on our ethnographic observations, we focus our discussion on (1) the solidaristic principles upon which the movement is based and (2) techniques of mediating between consumers’ wishes and the constraints of local agricultural production. By relating to the continued development of CSAs, we identify three interrelated innovation gaps and discuss new software architectures aimed at resolving the problems which arise as the movement grows.
{"title":"Community Supported Agriculture: The Concept of Solidarity in Mitigating Between Harvests and Needs","authors":"M. Landwehr, Philip Engelbutzeder, V. Wulf","doi":"10.1145/3411764.3445268","DOIUrl":"https://doi.org/10.1145/3411764.3445268","url":null,"abstract":"There is a developing recognition of the social and economic costs entailed in global supply chains. In this paper, we report on efforts to provide alternative, more sustainable and resilient models of production. Community Supported Agricultures (CSAs) address this problem but require new means of exchange which, we suggest, offer a design opportunity for sustainable HCI research. This paper presents a two months participatory observation in a food movement, a German CSA which developed a distribution system involving their own currency. Based on our ethnographic observations, we focus our discussion on (1) the solidaristic principles upon which the movement is based and (2) techniques of mediating between consumers’ wishes and the constraints of local agricultural production. By relating to the continued development of CSAs, we identify three interrelated innovation gaps and discuss new software architectures aimed at resolving the problems which arise as the movement grows.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"97 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78247018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ana M. Villanueva, Ziyi Liu, Zhengzhe Zhu, Xin Du, Joey Huang, K. Peppler, K. Ramani
Distance learning is facing a critical moment finding a balance between high quality education for remote students and engaging them in hands-on learning. This is particularly relevant for project-based classrooms and makerspaces, which typically require extensive trouble-shooting and example demonstrations from instructors. We present RobotAR, a teleconsulting robotics toolkit for creating Augmented Reality (AR) makerspaces. We present the hardware and software for an AR-compatible robot, which behaves as a student’s voice assistant and can be embodied by the instructor for teleconsultation. As a desktop-based teleconsulting agent, the instructor has control of the robot’s joints and position to better focus on areas of interest inside the workspace. Similarly, the instructor has access to the student’s virtual environment and the capability to create AR content to aid the student with problem-solving. We also performed a user study which compares current techniques for distance hands-on learning and an implementation of our toolkit.
{"title":"RobotAR: An Augmented Reality Compatible Teleconsulting Robotics Toolkit for Augmented Makerspace Experiences","authors":"Ana M. Villanueva, Ziyi Liu, Zhengzhe Zhu, Xin Du, Joey Huang, K. Peppler, K. Ramani","doi":"10.1145/3411764.3445726","DOIUrl":"https://doi.org/10.1145/3411764.3445726","url":null,"abstract":"Distance learning is facing a critical moment finding a balance between high quality education for remote students and engaging them in hands-on learning. This is particularly relevant for project-based classrooms and makerspaces, which typically require extensive trouble-shooting and example demonstrations from instructors. We present RobotAR, a teleconsulting robotics toolkit for creating Augmented Reality (AR) makerspaces. We present the hardware and software for an AR-compatible robot, which behaves as a student’s voice assistant and can be embodied by the instructor for teleconsultation. As a desktop-based teleconsulting agent, the instructor has control of the robot’s joints and position to better focus on areas of interest inside the workspace. Similarly, the instructor has access to the student’s virtual environment and the capability to create AR content to aid the student with problem-solving. 
We also performed a user study which compares current techniques for distance hands-on learning and an implementation of our toolkit.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75247534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The centrality of the biometric point of sale (POS) machine in the administration of food security in India's public distribution system (PDS) invites scrutiny for its primacy as a non-negotiable artifact in the monthly PDS process. In this paper, I critically examine how the POS machine emerges as a site for varying imaginaries of a technologically-mediated welfare system for the three primary stakeholders of the PDS, consisting of the beneficiaries, dealers, and state administrators. Drawing on ethnographic fieldwork, the paper traces the histories of interaction and portraitures that the three stakeholders bring to their description and interpretation of the POS machine. It shows that an active POS machine provokes the stakeholders in the PDS to view it as an artifact that invites engagement on practical, moral, and knowledge dimensions. The varying ‘biographies’ that stakeholders narrate of the POS machine collectively reveal the design, disposition, and functioning of a social justice infrastructure that rests on the compulsions of biometric technologies to improve inclusion and deter corruption in welfare delivery.
{"title":"Biographies of Biometric Devices: The POS Machine at Work in India's PDS","authors":"P. Mudliar","doi":"10.1145/3411764.3445553","DOIUrl":"https://doi.org/10.1145/3411764.3445553","url":null,"abstract":"The centrality of the biometric point of sale (POS) machine in the administration of food security in Indian's public distribution system (PDS) invites scrutiny for its primacy as a non-negotiable artifact in the monthly PDS process. In this paper, I critically examine how the POS machine emerges as a site for varying imaginaries of a technologically-mediated welfare system for the three primary stakeholders of the PDS, consisting of the beneficiaries, dealers, and state administrators. Drawing on ethnographic fieldwork, the paper traces the histories of interaction and portraitures that the three stakeholders bring to their description and interpretation of the POS machine. It shows that an active POS machine provokes the stakeholders in the PDS to view it as an artifact that invites engagement on practical, moral, and knowledge dimensions. The varying ‘biographies’ that stakeholders narrate of the POS machine, collectively reveal the design, disposition, and functioning of a social justice infrastructure that rests on the compulsions of biometric technologies to improve inclusion and deter corruption in welfare delivery.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77904110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paweł W. Woźniak, Monika Zbytniewska, Francisco Kiss, Jasmin Niess
Running is a widely popular physical activity that offers many health benefits. As runners progress with their training, understanding one’s own body becomes a key concern in achieving wellbeing through running. While extensive bodily sensing opportunities exist for runners, understanding complex sensor data is a challenge. In this paper, we investigate how data from shoe-worn sensors can be visualised to empower runners to improve their technique. We designed GraFeet, an augmented running shoe that visualises kinesiological data about the runner’s feet and gait. We compared our prototype with a standard sensor dashboard in a user study where users ran with the sensor and analysed the generated data after the run. GraFeet was perceived as more usable, producing more insights and less confusion among users. Based on our inquiry, we contribute findings about using data from body-worn sensors to support physically active individuals.
{"title":"Making Sense of Complex Running Metrics Using a Modified Running Shoe","authors":"Paweł W. Woźniak, Monika Zbytniewska, Francisco Kiss, Jasmin Niess","doi":"10.1145/3411764.3445506","DOIUrl":"https://doi.org/10.1145/3411764.3445506","url":null,"abstract":"Running is a widely popular physical activity that offers many health benefits. As runners progress with their training, understanding one’s own body becomes a key concern in achieving wellbeing through running. While extensive bodily sensing opportunities exist for runners, understanding complex sensor data is a challenge. In this paper, we investigate how data from shoe-worn sensors can be visualised to empower runners to improve their technique. We designed GraFeet—an augmented running shoe that visualises kinesiological data about the runner’s feet and gait. We compared our prototype with a standard sensor dashboard in a user study where users ran with the sensor and analysed the generated data after the run. GraFeet was perceived as more usable; producing more insights and less confusion in the users. Based on our inquiry, we contribute findings about using data from body-worn sensors to support physically active individuals.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78503470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yelim Kim, Mohi Reza, Joanna McGrenere, Dongwook Yoon
This work investigates the practices and challenges of voice user interface (VUI) designers. Existing VUI design guidelines recommend that designers strive for natural human-agent conversation. However, the literature leaves a critical gap regarding how designers pursue naturalness in VUIs and what their struggles are in doing so. Bridging this gap is necessary for identifying designers’ needs and supporting them. Our interviews with 20 VUI designers identified 12 ways that designers characterize and approach naturalness in VUIs. We categorized these characteristics into three groupings based on the types of conversational context that each characteristic contributes to: Social, Transactional, and Core. Our results contribute new findings on designers’ challenges, including a design dilemma in augmenting task-oriented VUIs with social conversations, difficulties in writing for spoken language, and a lack of proper tool support for imbuing synthesized voice with expressivity, along with implications for developing design tools and guidelines.
{"title":"Designers Characterize Naturalness in Voice User Interfaces: Their Goals, Practices, and Challenges","authors":"Yelim Kim, Mohi Reza, Joanna McGrenere, Dongwook Yoon","doi":"10.1145/3411764.3445579","DOIUrl":"https://doi.org/10.1145/3411764.3445579","url":null,"abstract":"This work investigates the practices and challenges of voice user interface (VUI) designers. Existing VUI design guidelines recommend that designers strive for natural human-agent conversation. However, the literature leaves a critical gap regarding how designers pursue naturalness in VUIs and what their struggles are in doing so. Bridging this gap is necessary for identifying designers’ needs and supporting them. Our interviews with 20 VUI designers identified 12 ways that designers characterize and approach naturalness in VUIs. We categorized these characteristics into three groupings based on the types of conversational context that each characteristic contributes to: Social, Transactional, and Core. Our results contribute new findings on designers’ challenges, such as a design dilemma in augmenting task-oriented VUIs with social conversations, difficulties in writing for spoken language, lack of proper tool support for imbuing synthesized voice with expressivity, and implications for developing design tools and guidelines.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"140 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76065571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive machine learning (iML) tools help to make ML accessible to users with limited ML expertise. However, gathering the necessary training data and expertise for model-building remains challenging. Transfer learning, a process where learned representations from a model trained on potentially terabytes of data can be transferred to a new, related task, offers the possibility of providing “building blocks” for non-expert users to quickly and effectively apply ML in their work. However, transfer learning largely remains an expert tool due to its high complexity. In this paper, we design a prototype to understand non-expert user behavior in an interactive environment that supports transfer learning. Our findings reveal a series of data- and perception-driven decision-making strategies non-expert users employ to (in)effectively transfer elements using their domain expertise. Finally, we synthesize design implications which might inform future interactive transfer learning environments.
{"title":"Designing Interactive Transfer Learning Tools for ML Non-Experts","authors":"Swati Mishra, Jeffrey M. Rzeszotarski","doi":"10.1145/3411764.3445096","DOIUrl":"https://doi.org/10.1145/3411764.3445096","url":null,"abstract":"Interactive machine learning (iML) tools help to make ML accessible to users with limited ML expertise. However, gathering necessary training data and expertise for model-building remains challenging. Transfer learning, a process where learned representations from a model trained on potentially terabytes of data can be transferred to a new, related task, offers the possibility of providing ”building blocks” for non-expert users to quickly and effectively apply ML in their work. However, transfer learning largely remains an expert tool due to its high complexity. In this paper, we design a prototype to understand non-expert user behavior in an interactive environment that supports transfer learning. Our findings reveal a series of data- and perception-driven decision-making strategies non-expert users employ, to (in)effectively transfer elements using their domain expertise. Finally, we synthesize design implications which might inform future interactive transfer learning environments.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"124 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76265146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jessica A. Pater, Fayika Farhat Nova, Amanda Coupe, L. Reining, Connie Kerrigan, Tammy R Toscos, Elizabeth D. Mynatt
A growing body of research in HCI focuses on understanding how social media and other social technologies impact a given user’s mental health, including eating disorders. In this paper, we review the results of an interview study with 10 clinicians spanning various specialties who treat people with eating disorders, in order to understand the clinical contexts of eating disorders and social media use. We found various tensions related to clinicians’ comfort with and education about the (mis)use of technologies, and to balancing the positive and negative aspects of social media use both in active disease states and in recovery. Understanding these tensions, as well as the variation in the current process of diagnosing patients, is a critical component in connecting HCI research focused on eating disorders to clinical practice and ultimately assessing how digital self-harm could be addressed clinically in the future.
{"title":"Charting the Unknown: Challenges in the Clinical Assessment of Patients’ Technology Use Related to Eating Disorders","authors":"Jessica A. Pater, Fayika Farhat Nova, Amanda Coupe, L. Reining, Connie Kerrigan, Tammy R Toscos, Elizabeth D. Mynatt","doi":"10.1145/3411764.3445289","DOIUrl":"https://doi.org/10.1145/3411764.3445289","url":null,"abstract":"A growing body of research in HCI focuses on understanding how social media and other social technologies impact a given user’s mental health, including eating disorders. In this paper, we review the results of an interview study with 10 clinicians spanning various specialties who treat people with eating disorders, in order to understand the clinical contexts of eating disorders and social media use. We found various tensions related to clinician comfort and education into the (mis)use of technologies and balancing the positive and negative aspects of social media use within active disease states as well as in recovery. Understanding these tensions as well as the variation in the current process of diagnosing patients is a critical component in connecting HCI research focused on eating disorders to clinical practice and ultimately assessing how digital self-harm could be addressed clinically in the future.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75624270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novice tangible interaction design students often find it challenging to generate input action ideas for tangible interfaces. To identify opportunities to aid input action idea generation, we built and evaluated a tool consisting of interactive physical artifacts coupled with digital examples of tangible systems and technical implementation guidance. Through video-recorded design sessions and interviews with twelve students, we investigated how they used the tool to generate input action ideas, how it supported them, and what challenges they faced. We found that the tool helped in generating input action ideas by enabling students to experience input actions, supporting hands-on explorations, and introducing possibilities. However, introducing examples at times caused design fixation. The tool fell short in supporting the planning of technical implementation of the generated ideas. This research is useful for tangible interaction design students, instructors, and researchers to apply in education, design similar tools, or conduct further research.
{"title":"Exploring Opportunities to Aid Generation of Input Action Ideas for Tangible User Interfaces","authors":"Uddipana Baishya, A. Antle, Carman Neustaedter","doi":"10.1145/3411764.3445713","DOIUrl":"https://doi.org/10.1145/3411764.3445713","url":null,"abstract":"Novice tangible interaction design students often find it challenging to generate input action ideas for tangible interfaces. To identify opportunities to aid input action idea generation, we built and evaluated a tool consisting of interactive physical artifacts coupled with digital examples of tangible systems and technical implementation guidance. Through video recorded design sessions and interviews with twelve students, we investigated how they used the tool to generate input action ideas, how it supported them, and what challenges they faced. We found that the tool helped in generating input action ideas by enabling to experience input actions, supporting hands-on explorations, and introducing possibilities. However, introducing examples at times caused design fixation. The tool fell short in supporting the planning of technical implementation of the generated ideas. This research is useful for tangible interaction design students, instructors, and researchers to apply in education, design similar tools, or conduct further research.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73149088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massimiliano Di Luca, H. Seifi, Simon Egan, Mar González-Franco
Numerous techniques have been proposed for locomotion in virtual reality (VR). Several taxonomies consider a large number of attributes (e.g., hardware, accessibility) to characterize these techniques. However, finding the appropriate locomotion technique (LT) and identifying gaps for future designs in the high-dimensional space of attributes can be quite challenging. To aid analysis and innovation, we devised Locomotion Vault (https://locomotionvault.github.io/), a database and visualization of over 100 LTs from academia and industry. We propose similarity between LTs as a metric to aid navigation and visualization. We show that similarity based on attribute values correlates with expert similarity assessments (a method that does not scale). Our analysis also highlights an inherent trade-off between simulation sickness and accessibility across LTs. As such, Locomotion Vault is a tool that unifies information on LTs and enables their standardization and large-scale comparison, helping to map the space of possibilities in VR locomotion.
{"title":"Locomotion Vault: the Extra Mile in Analyzing VR Locomotion Techniques","authors":"Massimiliano Di Luca, H. Seifi, Simon Egan, Mar González-Franco","doi":"10.1145/3411764.3445319","DOIUrl":"https://doi.org/10.1145/3411764.3445319","url":null,"abstract":"Numerous techniques have been proposed for locomotion in virtual reality (VR). Several taxonomies consider a large number of attributes (e.g., hardware, accessibility) to characterize these techniques. However, finding the appropriate locomotion technique (LT) and identifying gaps for future designs in the high-dimensional space of attributes can be quite challenging. To aid analysis and innovation, we devised Locomotion Vault (https://locomotionvault.github.io/), a database and visualization of over 100 LTs from academia and industry. We propose similarity between LTs as a metric to aid navigation and visualization. We show that similarity based on attribute values correlates with expert similarity assessments (a method that does not scale). Our analysis also highlights an inherent trade-off between simulation sickness and accessibility across LTs. As such, Locomotion Vault shows to be a tool that unifies information on LTs and enables their standardization and large-scale comparison to help understand the space of possibilities in VR locomotion.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"230 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73245949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paul Schlosser, Ben Matthews, Isaac S Salisbury, P. Sanderson, Sass Hayes
Head-worn displays (HWDs) offer their users high mobility, hands-free operation, and “see-what-I-see” features. In the prehospital environment, emergency medical services (EMS) staff could benefit from the unique characteristics of HWDs. We conducted a field study to analyze work practices of EMS staff and the potential of HWDs to support their activities. Based on our observations and the comments of EMS staff, we propose three use cases for HWDs in the prehospital environment: (1) enhanced communication between different care providers, (2) hands-free access to clinical monitoring and imaging, and (3) improved realism of training scenarios. We conclude with a set of design considerations and suggest that for the successful implementation of HWDs in EMS environments, researchers, designers, and clinical stakeholders should consider the harsh outdoor environment in which HWDs will be used, the extensive workload of staff, the complex collaboration performed, privacy requirements, and the high variability of work.
{"title":"Head-Worn Displays for Emergency Medical Services Staff: Properties of Prehospital Work, Use Cases, and Design Considerations","authors":"Paul Schlosser, Ben Matthews, Isaac S Salisbury, P. Sanderson, Sass Hayes","doi":"10.1145/3411764.3445614","DOIUrl":"https://doi.org/10.1145/3411764.3445614","url":null,"abstract":"Head-worn displays (HWDs) offer their users high mobility, hands-free operation, and “see-what-I-see” features. In the prehospital environment, emergency medical services (EMS) staff could benefit from the unique characteristics of HWDs. We conducted a field study to analyze work practices of EMS staff and the potential of HWDs to support their activities. Based on our observations and the comments of EMS staff, we propose three use cases for HWDs in the prehospital environment. They are (1) enhanced communication between different care providers, (2) hands-free access to clinical monitoring and imaging, (3) and improved realism of training scenarios. We conclude with a set of design considerations and suggest that for the successful implementation of HWDs in EMS environments, researchers, designers, and clinical stakeholders should consider the harsh outdoor environment in which HWDs will be used, the extensive workload of staff, the complex collaboration performed, privacy requirements, and the high variability of work.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72805427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}