Enhancing Emotional Awareness and Regulation in Movies and Music Based on Personality
D. Pereira, T. Chambel. DOI: https://doi.org/10.1145/3573381.3596462
Music and movies are powerful art forms that elicit deep feelings and emotions and help us reflect on our own and other people's lives, on subjects such as dreams, mental states, routines, society, and culture. The evolution of technology has made these forms of entertainment and education ever easier to access for everyone, everywhere in the world. Given our easy and frequent daily interaction with a huge amount of movies and music, and their impact on our emotions, it becomes increasingly relevant to find ways to augment people's emotional perception and awareness of multimedia content in and through movies and music. In this paper, we present the motivation and background for these challenges and propose an approach for the design, development, and expansion of interactive features that allow users to visualize and access the emotions felt while engaging with movies and music. A special focus is put on content that has in some way been meaningful or can be associated with a significant memory, providing insights and helping users manage and regulate their emotions, revisit content with increased awareness, or even receive recommendations for new content, taking their personality into account.
Enhancing Engagement through Digital Cultural Heritage: A Case Study about Senior Citizens using a Virtual Reality Museum
G. Vishwanath. DOI: https://doi.org/10.1145/3573381.3596154
As the use of Virtual Reality (VR) increases, museums have been using it to create simulations of their artefact collections. However, how accessible, inclusive, and engaging these simulations are for senior citizens has been understudied. To address this problem, this case study presents the design of the “Pop-up VR Museum”, a VR experience based on cultural heritage artefacts from the Design Museum in Helsinki that aims to engage audiences across a wide age range. Users can interact with virtual artefacts and listen to stories contributed by different communities. The Pop-up VR Museum has been tested with 254 users at the museum and taken to several elderly care homes. Evaluation is based on users’ gameplay data and their responses to post-experience questionnaires. Results indicate some variation in types of engagement across users’ age groups. Despite potential limitations, this study provides valuable insights for other museums seeking to create inclusive VR experiences.
Analysing and Understanding Embodied Interactions in Virtual Reality Systems
Florent Robert. DOI: https://doi.org/10.1145/3573381.3597234
Virtual reality (VR) offers opportunities for human-computer interaction research to embody users in immersive environments and observe how they interact with 3D scenarios under well-controlled conditions. VR content influences users’ physical and emotional states more strongly than traditional 2D media; however, a fuller understanding of this kind of embodied interaction is currently limited by the extent to which attention and behaviour can be observed in a VR environment, and by the accuracy with which these observations can be interpreted as, and mapped to, real-world interactions and intentions. This thesis aims to create a system that helps designers understand the embodied user experience in VR environments: how users feel, what their intentions are when interacting with a given object, and how to guide them based on their needs and attention. Controlled, guided scenarios will help reduce the perception gap between the designer building an experience and the user living it, leading to more efficient behaviour analysis in VR systems.
Validating Objective Evaluation Metric: Is Fréchet Motion Distance able to Capture Foot Skating Artifacts ?
Antoine Maiorca, Youngwoo Yoon, T. Dutoit. DOI: https://doi.org/10.1145/3573381.3596460
Automatically generating character motion is one of the technologies required for virtual reality, graphics, and robotics. Motion synthesis with deep learning is an emerging research topic. A key component in developing such an algorithm is the design of a proper objective metric to evaluate the quality and diversity of the synthesized motion dataset, two key factors in the performance of generative models. The Fréchet distance is nowadays a common way to assess this performance. In the motion generation field, the validation of such evaluation methods relies on computing the Fréchet distance between embeddings of the ground-truth dataset and motion samples polluted by synthetic noise that mimics the artifacts produced by generative algorithms. However, synthetic noise degradation does not fully represent the motion perturbations that are commonly perceived. One of these artifacts is foot skating: the unnatural sliding of the feet on the ground during locomotion. In this work-in-progress paper, we test how well the Fréchet Motion Distance (FMD), proposed in previous work, measures foot skating artifacts, and we find that FMD cannot reliably measure the intensity of the skating degradation.
The Green Notebook - A Co-Creativity Partner for Facilitating Sustainability Reflection
Yawen Deng, Petra Jääskeläinen, V. Popova. DOI: https://doi.org/10.1145/3573381.3596465
AI is becoming increasingly popular in artistic work. Yet tools for calculating the environmental impact of AI are better adapted to other contexts than to creative practices, which can make them hard for non-experts to comprehend. In this study, based on interviews with AI artists, we developed a design artifact called The Green Notebook: a physical notebook in which the AI artist can discuss ideas and receive feedback on their expected environmental impact. The conversational experience between the artist and the interface was informed by an online content analysis of artistic work processes. The Notebook was explored and assessed with five artists in Wizard-of-Oz and focus group studies. Overall, the participants found a co-creation process with an enhanced ability to reflect on sustainability an accessible way to engage with the sustainability considerations of their AI artistic practices. We provide insights into the Notebook’s perceived role and the conversational strategies used by the artists. Furthermore, we discuss the trade-offs between politeness vs. efficiency and focus vs. integration to inform future research.
AI-Human Collaboration for in Situ Interactive Exploration of Behaviours From Immersive Environment
Victor Roger, Yves Duvivier, Matthieu Perreira da Silva, Yannick Prié. DOI: https://doi.org/10.1145/3573381.3596506
Experiments in immersive environments allow the collection of large amounts of data that are closely related to individual behaviour. Recording such experiments enables the complex study of under-constrained tasks, that is, tasks that allow a high degree of contingency in how they are resolved. This contingency allows better discrimination of individual behaviour; however, the high complexity of the tasks makes them difficult to analyse. My thesis aims to discuss the advantages of Immersive Analytics for analysing the hybrid sequential data (trajectories and events) generated in immersive environments. Because of the high contingency of behaviours extracted from immersive environments, the analysis needs to be performed at a very high level of abstraction, and the massive amount of data generated highlights the need for a model that allows feature extraction at that level. Since the exploration scheme is not known in advance, the visualisations provided to the analyst should be highly interactive and adaptable, following the analyst’s queries as they search for new insights in the data.
Quality Assessment of Video Services in the Long Term
Natalia Cieplińska, L. Janowski, K. De Moor. DOI: https://doi.org/10.1145/3573381.3596509
In traditional subjective video quality experiments, the presented sequences are short and quality ratings are based on a single interaction with a service (i.e., one session). However, in real-life scenarios, users interact with a video service for a longer period of time. If decisions are made, such as to abandon a service, they are formulated based on longitudinal multi-episodic interaction. Therefore, it is important to better understand how quality is perceived in a longer interaction and how quality perception is linked to behavioral implications. My PhD work encompasses a longitudinal study of users’ interactions with a video service using a mobile device. In our study, which consists of six phases, we use different study designs to investigate how users perceive quality in a more ecologically valid setting. The study is carried out using a previously validated setup, which consists of compression software and a mobile application.
Subjective Test Environments: A Multifaceted Examination of Their Impact on Test Results
Jingwen Zhu, Ali Ak, Charles Dormeval, P. Le Callet, K. Rahul, S. Sethuraman. DOI: https://doi.org/10.1145/3573381.3596470
Quality of Experience (QoE) in video streaming scenarios is significantly affected by the viewing environment and display device. Understanding and measuring the impact of these settings on QoE can help develop viewing-environment-aware metrics and improve the efficiency of video streaming services. In this ongoing work, we conducted a subjective study in both laboratory and home settings, using the same content and design, to measure QoE with the Degradation Category Rating (DCR) method. We first analyzed subject inconsistency and the confidence intervals of the Mean Opinion Scores (MOS) in the two settings. We then used statistical tests such as ANOVA and the t-test to analyze the differences in subjective video quality between the two viewing environments. Additionally, we employed the Elimination-By-Aspects (EBA) model to quantify the influence of the different settings on the measured QoE. We conclude with several research questions that could be further explored to better understand the impact of the viewing environment on QoE.
Video Consumption in Context: Influence of Data Plan Consumption on QoE
Ali Ak, Anne-Flore Perrin, Denise Noyes, I. Katsavounidis, P. Le Callet. DOI: https://doi.org/10.1145/3573381.3596474
User expectations are one of the main factors in providing satisfactory QoE for streaming service providers. Measuring the acceptability and annoyance of video content therefore provides valuable insight when assessed under a given context. In this ongoing work, we measure video QoE in terms of acceptability and annoyance in the context of the data remaining in a mobile data plan. We show that simple logos can be used during the experiment to prompt subjects with the context, and that the different context levels may affect users’ expectations and, consequently, their satisfaction. Finally, we show that objective metrics can be used to determine acceptability and annoyance thresholds for a given context.
A VR Intervention Based on Social Story™ to Develop Social Skills in Children with ASD
Yujing Zhang, Conor Keighrey, Niall Murray. DOI: https://doi.org/10.1145/3573381.3596459
Social interactions and communication play a crucial role in people’s lives. Those with autism spectrum disorder (ASD), especially children, may have difficulty participating in social interactions. Such challenges can be characterised by atypical behaviours and limited sharing intention in social settings. Sharing is an important part of social interaction, and a lack of awareness or limited willingness to share undermines the development of social skills. These characteristics may be related to an impaired theory of mind (ToM), that is, difficulty understanding other people’s wishes and feelings. A range of interventions have been created to help develop social communication skills. The Social Story™ intervention is one such example: it provides clear visual narratives to explain social situations and concepts to children with ASD, offering a mechanism to visually communicate typical communication behaviours. The Social Story™ approach is book-based; as such, it depends on a reader to communicate the concepts well and demands a certain level of imaginative capacity from the listener. With this limitation of the paper-based medium in mind, this work-in-progress paper outlines the steps, approach, and end application for translating the Social Story™ into a virtual reality (VR) experience. The Social Story™ experience in VR potentially offers a more interactive, immersive, and flexible intervention.