Empowering Caregivers of People Living with Dementia to Use Music Therapeutically at Home: Design Opportunities
Romina Carrasco, F. Baker, A. Bukowska, I. Clark, Libby M. Flynn, Kate McMahon, H. Odell-Miller, Karette Stensaeth, J. Tamplin, T. Sousa, Jenny Waycott, T. Wosch
https://doi.org/10.1145/3441000.3441082
Human-computer interaction researchers have explored how to design technologies to support people with dementia (PwD) and their caregivers, but limited attention has been given to how to facilitate music therapy in dementia care. The use of music to help manage the symptoms of dementia is often guided by a music therapist, who adapts the intervention to respond to the changing needs of the person living with dementia. However, as the incidence of dementia increases worldwide, individualised therapy programs are less feasible, making it valuable to consider technology-based approaches. In this paper, we analyze data from case studies of home-based music therapy training interventions with two families. The findings show that embodied interactions supported the therapist in responding to the needs of the PwD and built an empathic environment that empowered the caregivers’ learning. We discuss opportunities and challenges for designing technologies that support family caregivers’ therapy-informed music use in dementia care.
User Behaviour Analysis of Mixed Reality Remote Collaboration with a Hybrid View Interface
Lei Gao, Huidong Bai, M. Billinghurst, R. Lindeman
https://doi.org/10.1145/3441000.3441038
In this paper, we present a Mixed Reality (MR) remote collaboration system that enables hybrid view sharing from the local worker to the remote expert for real-time remote guidance. The local worker can share either a 2D first-person view, a 360° live view with a fixed viewpoint, or a 3D free view within a point-cloud reconstruction of the local environment. The remote expert can access these views and freely switch between them in Virtual Reality (VR) for better awareness of the local worker’s surroundings. We investigate the remote experts’ behaviours while using the hybrid view interface during typical pick-and-place remote guiding tasks. We found that the remote experts preferred to learn the local physical layout and search for targets from a global perspective in the 3D free view. The results also showed that the experts chose to use the 360° live view with independent view control, rather than the high-resolution 2D first-person view, to control the task procedures and check the local worker’s actions. Our study informs viewpoint interface design for MR remote collaboration systems in various guiding scenarios.
Creating 3D worlds through storytelling and narration
Cameron Edmond, D. Branchaud, T. Bednarz
https://doi.org/10.1145/3441000.3441028
While contemporary game engine tools have lowered the barrier of entry to creating 3D environments, many rely on engineering, architectural, or cartographic approaches. While useful, these usually facilitate very particular methods of 3D space creation. In this paper, we present our tool for creating 3D spaces through the use of narration and storytelling as an interface. Our platform uses voice recognition to let users narrate a story or event and have elements of the story appear in 3D space as virtual objects in real time. Through further voice commands or drag-and-drop controls, users can then reconfigure these objects to better suit their story or create new variations of it. Our platform is designed for immersive systems such as VR HMDs, 3D-enabled cylindrical screens, and stereo walls, as well as non-immersive ones such as desktops and tablets, allowing for accessibility and collaboration. A novel contribution of this paper is this method of building worlds through narration. To provide insight into the tool’s capabilities and the reasoning behind design decisions, we present an account of constructing the story Lessons in Vinyl within the tool during early development.
Occupational Therapy Meets Design: An Augmented Reality Tool for Assistive Home Modifications
Hiroo Aoyama, L. Aflatoony
https://doi.org/10.1145/3441000.3441016
Occupational therapists (OTs) support people with physical impairments and disabilities (PwIDs) to maintain their functional independence by suggesting home modifications that incorporate assistive technologies (ATs). Augmented reality (AR) systems can support home-modification processes by superimposing 3D items (e.g., ATs) onto real environments (e.g., homes). In this paper, we report on evaluating a tablet-based AR application for use by OTs to facilitate envisioning the most appropriate scenarios for AT purchase, installation, and use in PwIDs’ homes. We conducted in situ user studies with ten OTs to evaluate the AR tool and identified the following advantages of using AR in home-modification processes: (1) providing visual clues (AT fit, size, function, and appearance) in the home, (2) supporting collaborative home-modification decision-making processes, (3) facilitating a holistic home-modification approach, and (4) involving stakeholders throughout the home-modification process. We also discuss lessons learned regarding the usability, usefulness, and future iterations of the AR home-modification tool.
Using Video Games to Regulate Emotions
Zhanna Sarsenbayeva, Benjamin Tag, Shue Yan, V. Kostakos, Jorge Gonçalves
https://doi.org/10.1145/3441000.3441035
In this paper, we investigate the practice of using video games for digital emotion regulation. In a two-week diary study, we collected participants’ records of their emotional states and video game experiences. Our findings show that people use video games to respond to their reality and to socialise and communicate with their friends. We also show that video games lead to intentional and unintentional emotional outcomes depending on different aspects of the game. Our work provides insights into using video games for digital emotion regulation and directions for future research.
Development of a framework for UX KPIs in Industry - a case study
Tina Øvad, Kashmiri Stec, L. B. Larsen, Lucca Julie Nellemann, Jedrzej Czapla
https://doi.org/10.1145/3441000.3441042
This work addresses the needs companies face when assessing and tracking the UX quality of their products, which is necessary to ensure that the desired level of quality is met, to clarify potential areas of improvement, and to compare competitor products via benchmarking. We present a framework for UX KPI assessment at Bang & Olufsen, a luxury audio product manufacturer. The framework is inspired by well-known UX scales such as the UEQ and AttrakDiff, as well as more business-oriented measures such as NPS and CES scores. The resulting UX KPI framework comprises the test procedures and a scale containing these and ten unique features such as “Enjoyable”, “Exciting”, and “Sound experience”. The paper presents and discusses the UX KPI framework and the rationales behind it. Results of more than 200 user tests are presented and discussed. Factor analysis was employed to analyse the assumptions behind the UX KPI and identify where improvements can be made.
Usage and Effect of Eye Tracking in Remote Guidance
Chun Xiao, Weidong Huang, M. Billinghurst
https://doi.org/10.1145/3441000.3441051
Eye gaze information is an important and effective input modality for human-human communication. As eye trackers become more affordable for remote collaboration, a systematic literature review of eye-tracking-supported collaboration systems for physical tasks can improve collaboration and benefit future system design. Toward this end, we review publications since the year 2000 and categorize the reported prototypes and systems with respect to eye gaze functionality, eye-tracked subjects, physical task types, and gaze visualisations, providing an overview of the usage and effect of eye tracking in remote guidance over time. We identify the uses of eye tracking in remote guidance as a gaze behaviour and intention indicator, a fast and effective input modality, a physical referential pointer, and a communication cue for social presence. Finally, we analyse and discuss the effects and limitations of eye tracking in remote guidance with respect to different gaze visualisations in a variety of applied scenarios. The technical and social challenges identified will improve collaboration under remote guidance and benefit the design of eye-tracking-supported collaboration systems in the future.
Lumen: A Case Study of Designing for Sustainable Energy Communities through Ambient Feedback
Anders Høgh Hansen, Rikke Hagensby Jensen, Lasse Stausgaard Jensen, Emil Kongsgaard Guldager, Andreas Winkel Sigsgaard, Frederik Moroder, D. Raptis, Laurynas Siksnys, T. Pedersen, M. Skov
https://doi.org/10.1145/3441000.3441001
Within sustainable HCI research, there is growing interest in studying how the design of interactive technology can improve the utilisation of renewable energy resources. In this case study, we explore the concept of energy communities and how technology can be designed to support people in cooperating around the transition to more sustainable electricity use. To do so, we designed the Lumen prototype, which aims to support a small energy community in shifting domestic energy-consuming practices to align with times of high availability of sustainable energy. By creating awareness of current and future sustainable energy availability through an ambient feedback display, the Lumen prototype informs households about the community’s consumption patterns. To obtain insights into how people understand and experience an energy community, we conducted a qualitative field study with three Danish households. Through our study, we found that the sustainability awareness and incentives materialised in the ambient display were amplified by the dynamics of the community. We conclude by discussing future directions for designing technology for energy communities.
Online Dating Meets Artificial Intelligence: How the Perception of Algorithmically Generated Profile Text Impacts Attractiveness and Trust
Yihan Wu, Ryan M. Kelly
https://doi.org/10.1145/3441000.3441074
Online dating systems are widely used to meet romantic partners, yet people often struggle to write attractive profiles on these applications. Artificial intelligence (AI) has the potential to help online daters by automatically generating profile content, but little research has explored how the use of AI in online dating could affect users’ perceptions of one another. The present study investigated how the perceived involvement of AI influences ratings of attractiveness and trust in online dating. In a between-subjects experiment, participants (N = 48) were presented with the text of 10 dating profiles and were told that the profiles had been written by humans or with the help of AI. We found that the perceived involvement of AI did not have a significant impact on attractiveness, but that it did lead to a significant reduction in trustworthiness of the profile author. We interpret our findings through the lens of social information processing theory, discussing the tradeoffs associated with designing to reveal or hide the use of AI in online dating.
Challenges facing movement research in the time of Covid-19: Issues in redesigning workshops for remote participation and data collection
J. Martin, L. Loke, Kazjon Grace
https://doi.org/10.1145/3441000.3441055
Research studies exploring movement-based interactions have typically required access to physical space for workshops with multiple participants. Under current Covid-19 social distancing requirements, the move from live instruction and investigation to remote workshops poses both creative and technical challenges, requiring a rethink of what data are collected and how. This paper considers these issues as they apply to an early-stage research project on human–object-based movement. The project integrates theatre and dance with computer vision tools to create new, live theatrical experiences. The first case study focuses on human–object movement using improvised theatre methods, before the computer vision component is introduced to the participants. Issues facing remote workshops include restructuring to accommodate improvisation, devised theatre, and creative collaboration. Physical space and safety, camera issues, spatial awareness, impact on improvisational practices, and data limitations and quality are further challenges for the project. Input is invited from the HCI community on the issues raised and the feasibility of the proposed methods as we prepare to run our study.