Jingwen Zhu, Ali Ak, Charles Dormeval, P. Le Callet, K. Rahul, S. Sethuraman
Quality of Experience (QoE) in video streaming scenarios is significantly affected by the viewing environment and display device. Understanding and measuring the impact of these settings on QoE can help develop viewing-environment-aware metrics and improve the efficiency of video streaming services. In this ongoing work, we conducted a subjective study in both laboratory and home settings, using the same content and design, to measure QoE with the Degradation Category Rating (DCR) method. We first analyzed subject inconsistency and the confidence intervals of the Mean Opinion Scores (MOS) in the two settings. We then used statistical tests such as ANOVA and the t-test to analyze the differences in subjective video quality ratings between the two viewing environments. Additionally, we employed the Eliminated-By-Aspects (EBA) model to quantify the influence of the different settings on the measured QoE. We conclude with several research questions that could be further explored to better understand the impact of the viewing environment on QoE.
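As an illustration of the kind of per-setting comparison this abstract describes, the sketch below runs a Welch two-sample t-test on hypothetical MOS ratings collected in a lab and a home setting. The data and function name are illustrative assumptions, not taken from the paper.

```python
import math
import statistics as st

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for independent samples with unequal variances."""
    va, vb = st.variance(a), st.variance(b)   # sample variances
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)         # std. error of the mean difference
    t = (st.mean(a) - st.mean(b)) / se
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical MOS ratings (5-point DCR scale) for one stimulus
lab  = [4.2, 3.8, 4.0, 4.5, 3.9, 4.1]
home = [3.6, 3.9, 3.4, 4.0, 3.5, 3.7]

t, df = welch_t(lab, home)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice the t statistic would be compared against the t distribution with df degrees of freedom (e.g. via scipy.stats) to obtain a p-value; ANOVA extends the same comparison to more than two viewing settings.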
{"title":"Subjective Test Environments: A Multifaceted Examination of Their Impact on Test Results","authors":"Jingwen Zhu, Ali Ak, Charles Dormeval, P. Le Callet, K. Rahul, S. Sethuraman","doi":"10.1145/3573381.3596470","DOIUrl":"https://doi.org/10.1145/3573381.3596470","url":null,"abstract":"Quality of Experience (QoE) in video streaming scenarios is significantly affected by the viewing environment and display device. Understanding and measuring the impact of these settings on QoE can help develop viewing environment-aware metrics and improve the efficiency of video streaming services. In this ongoing work, we conducted a subjective study in both laboratory and home settings using the same content and design to measure QoE in Degradation Category Rating (DCR). We first analyzed subject inconsistency and confidence intervals of the Mean Opinion Scores (MOS) between the two settings. We then used statistical models such as ANOVA and t-test to analyze the differences in subjective tests on video quality between the two viewing environments. Additionally, we employed the Eliminated-By-Aspects (EBA) model to quantify the influence of different settings on the measured QoE. We conclude with several research questions that could be further explored to better understand the impact of the viewing environment on QoE.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114544472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social interactions and communication play a crucial role in people’s lives. Those with autism spectrum disorder (ASD), especially children, may have difficulties participating in social interactions. Such challenges can be characterised by atypical behaviours and limited sharing intention in social settings. Sharing is an important part of social interaction, and a lack of awareness or limited willingness to share undermines the development of social skills. These characteristics may be related to an impaired theory of mind (ToM), that is, difficulty in understanding other people’s wishes and feelings. A range of interventions have been created to help develop social communication skills. The Social Story™ intervention is one such example: it provides clear visual narratives to explain social situations and concepts to children with ASD. The narratives provide a mechanism to visually communicate typical communication behaviours. The social story intervention approach is book-based; as such, it depends on a reader to communicate the concepts well and demands a certain level of imagination from the listener. With the limitations of the paper-based medium in mind, this work-in-progress paper outlines the steps, approach, and end application for translating the Social Story™ into a virtual reality (VR) experience. The Social Story™ experience in VR potentially offers a more interactive, immersive and flexible intervention.
{"title":"A VR Intervention Based on Social Story™ to Develop Social Skills in Children with ASD","authors":"Yujing Zhang, Conor Keighrey, Niall Murray","doi":"10.1145/3573381.3596459","DOIUrl":"https://doi.org/10.1145/3573381.3596459","url":null,"abstract":"Social interactions and communication play a crucial role in people’s lives. Those with autism spectrum disorder (ASD), especially children, may have difficulties participating in social interactions. Such challenges can be characterised by displaying atypical behaviours and limited sharing intention in social settings. Sharing is an important part of social interaction, and a lack of awareness or limited willingness to share undermines the development of social skills. These characteristics may be related to the impaired theory of mind (ToM). This means that it is difficult to understand people’s wishes and feelings. A range of interventions have been created to help develop social communication skills. The Social Story™ intervention is one such example, and it provides clear visual narratives to explain social situations and concepts to help children with ASD. The narratives provide a mechanism to visually communicate typical communication behaviours. The social story intervention approach is book-based. As such, it is dependent on a reader to communicate well the concepts and demands a certain level with respect to the listener’s imagination capacity. With the limitation of the paper-based medium in mind, this work-in-progress paper outlines the steps, approach, and end application to translate the Social Story™ into a virtual reality (VR) experience. 
The Social Story™ experience in VR potentially offers a more interactive, immersive and flexible intervention.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121830496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Charlotte Scarpa, G. Haese, Toinon Vigier, P. Le Callet
The building sector and indoor environment design are undergoing major changes. There is a need to reconsider the way offices are built from a user-centric point of view. Research has shown the influence of perceived comfort and satisfaction on performance in the workplace. By understanding how multi-sensory information is integrated by the nervous system and which environmental parameters influence perception the most, it could be possible to improve work environments. With the emergence of new virtual reality (VR) and augmented reality (AR) technologies, the collection and processing of sensory information is rapidly advancing, bringing forward more dynamic aspects of sensory perception. In simulated environments, environmental parameters can be easily manipulated at reasonable cost, allowing the user’s sensory experience to be controlled and guided. Moreover, the effects of contextual and surrounding stimuli on users can be easily collected throughout a test, in the form of physiological and behavioral data. Using indoor simulations, the goal of this doctoral research is to develop a multi-criteria comfort scale based on physiological indicators under performance constraints. In doing so, it would be possible to define new quality indicators combining the different physical factors adapted to the uses and the space. To achieve the objectives of this project, the first step is to develop and validate an immersive and interactive methodology for assessing the effect of multisensory information on comfort and performance in work environments.
{"title":"Construction of immersive and interactive methodology based on physiological indicators to subjectively and objectively assess comfort and performances in work offices","authors":"Charlotte Scarpa, G. Haese, Toinon Vigier, P. Le Callet","doi":"10.1145/3573381.3597233","DOIUrl":"https://doi.org/10.1145/3573381.3597233","url":null,"abstract":"The building sector and the indoor environment conception is undergoing major changes. There is a need to reconsider the way offices are built from a user’s centric point of view. Research has shown the influence of perceived comfort and satisfaction on performance in the workplace. By understanding how multi-sensory information is integrated into the nervous system and which environmental parameters influence the most perception, it could be possible to improve work environments. With the emergence of new virtual reality (VR) and augmented reality (AR) technologies, the collection and processing of sensory information is rapidly advancing, moving forward more dynamic aspects of sensory perception. Through simulated environments, environmental parameters can be easily manipulated at reasonable costs, allowing control and guiding the user’s sensory experience. Moreover, the effects of contextual and surrounding stimuli on users can be easily collected throughout the test, in the form of physiological and behavioral data. Through the use of indoor simulations, this doctoral research goal is to develop a multi-criteria comfort scale based on physiological indicators under performance constraints. In doing this, it would be possible to define new quality indicators combining the different physical factors adapted to the uses and space. 
In order to achieve the objectives of this project, the first step is to develop and validate an immersive and interactive methodology for the assessment of multisensory information on comfort and performance in work environments.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114212086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Liangding Li, Stephanie Carnell, Katherine Harris, Linda J. Walters, D. Reiners, C. Cruz-Neira
Our paper presents LIFT, a system that enables educators to create immersive virtual field trip experiences for their students. LIFT overcomes the challenges of enabling non-technical educators to create their own content and allows educators to act as guides during the immersive experience. The system combines live-streamed 360° video, 3D models, and live instruction to create collaborative virtual field trips. To evaluate LIFT, we developed a field trip with biology educators from the University of Central Florida (UCF) and showcased it at a science festival. Our results suggest that LIFT can help educators create immersive educational content while out in the field. However, our pilot observational study at the museum highlighted the need for further research to explore the instructional design of mixed immersive content created with LIFT. Overall, our work provides an application development framework for educators to create immersive, hands-on field trip experiences.
{"title":"LIFT - A System to Create Mixed 360° Video and 3D Content for Live Immersive Virtual Field Trip","authors":"Liangding Li, Stephanie Carnell, Katherine Harris, Linda J. Walters, D. Reiners, C. Cruz-Neira","doi":"10.1145/3573381.3596162","DOIUrl":"https://doi.org/10.1145/3573381.3596162","url":null,"abstract":"Our paper presents LIFT, a system that enables educators to create immersive virtual field trip experiences for their students. LIFT overcomes the challenges of enabling non-technical educators to create their own content and allows educators to act as guides during the immersive experience. The system combines live-streamed 360° video, 3D models, and live instruction to create collaborative virtual field trips. To evaluate LIFT, we developed a field trip with biology educators from the University of Central Florida(UCF) and showcased it at a science festival. Our results suggest that LIFT can help educators create immersive educational content while out in the field. However, our pilot observational study at the museum highlighted the need for further research to explore the instructional design of mixed immersive content created with LIFT. Overall, our work provides an application development framework for educators to create immersive, hands-on field trip experiences.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126669884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The production of immersive media often involves 360-degree viewing on mobile or immersive VR devices, particularly in the field of immersive journalism. However, it is unclear how the different technologies used to present such media affect the experience of presence. To investigate this, a laboratory experiment was conducted with 87 participants who were assigned to one of three conditions: HMD-360, Monitor-360, or Monitor-article, representing three distinct levels of technological immersion. All three conditions represented the same base content, with high and mid-immersion featuring a panoramic 360-video and low-immersion presenting an article composed of a transcript and video stills. The study found that presence could be considered a composite of Involvement, Naturalness, Location, and Distraction. Mid- and high-immersion conditions elicited both higher Involvement and higher Distraction compared to low immersion. Furthermore, the participants’ propensity for psychological immersion maximized the effects of technological immersion, but only through the aspect of Involvement. In conclusion, the study sheds light on how different technologies used to present immersive media affect the experience of presence and suggests that higher technological immersiveness does not necessarily result in a higher reported presence.
{"title":"More Immersed but Less Present: Unpacking Factors of Presence Across Devices","authors":"Mila Bujić, M. Salminen, Juho Hamari","doi":"10.1145/3573381.3596152","DOIUrl":"https://doi.org/10.1145/3573381.3596152","url":null,"abstract":"The production of immersive media often involves 360-degree viewing on mobile or immersive VR devices, particularly in the field of immersive journalism. However, it is unclear how the different technologies used to present such media affect the experience of presence. To investigate this, a laboratory experiment was conducted with 87 participants who were assigned to one of three conditions: HMD-360, Monitor-360, or Monitor-article, representing three distinct levels of technological immersion. All three conditions represented the same base content, with high and mid-immersion featuring a panoramic 360-video and low-immersion presenting an article composed of a transcript and video stills. The study found that presence could be considered a composite of Involvement, Naturalness, Location, and Distraction. Mid- and high-immersion conditions elicited both higher Involvement and higher Distraction compared to low immersion. Furthermore, the participants’ propensity for psychological immersion maximized the effects of technological immersion, but only through the aspect of Involvement. 
In conclusion, the study sheds light on how different technologies used to present immersive media affect the experience of presence and suggests that higher technological immersiveness does not necessarily result in a higher reported presence.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117328698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Robotham, Ashutosh Singla, A. Raake, Olli S. Rummukainen, Emanuël Habets
This study uses a mixed between- and within-subjects test design to evaluate the influence of interactive formats on the quality of binaurally rendered 360° spatial audio content. Focusing on ecological validity using real-world recordings of 60 s duration, three independent groups of subjects () were exposed to three formats: audio only (A), audio with 2D visuals (A2DV), and audio with head-mounted display (AHMD) visuals. Within each interactive format, two sessions were conducted to evaluate degraded audio conditions: bit-rate and Ambisonics order. Our results show a statistically significant effect (p < .05) of format only on spatial audio quality ratings for Ambisonics order. Exploration data analysis shows that format A yields little variability in exploration, while formats A2DV and AHMD yield broader viewing distribution of 360° content. The results imply audio quality factors can be optimized depending on the interactive format.
{"title":"Influence of Multi-Modal Interactive Formats on Subjective Audio Quality and Exploration Behavior","authors":"T. Robotham, Ashutosh Singla, A. Raake, Olli S. Rummukainen, Emanuël Habets","doi":"10.1145/3573381.3596155","DOIUrl":"https://doi.org/10.1145/3573381.3596155","url":null,"abstract":"This study uses a mixed between- and within-subjects test design to evaluate the influence of interactive formats on the quality of binaurally rendered 360° spatial audio content. Focusing on ecological validity using real-world recordings of 60 s duration, three independent groups of subjects () were exposed to three formats: audio only (A), audio with 2D visuals (A2DV), and audio with head-mounted display (AHMD) visuals. Within each interactive format, two sessions were conducted to evaluate degraded audio conditions: bit-rate and Ambisonics order. Our results show a statistically significant effect (p < .05) of format only on spatial audio quality ratings for Ambisonics order. Exploration data analysis shows that format A yields little variability in exploration, while formats A2DV and AHMD yield broader viewing distribution of 360° content. The results imply audio quality factors can be optimized depending on the interactive format.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127704224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiovisual media is an integral part of many people’s everyday lives. People with accessibility needs, especially complex accessibility needs, however, may face challenges accessing this content. This doctoral work addresses this problem by investigating how complex accessibility needs can be met through content personalisation that leverages data-driven methods. To this end, I will collaborate with people with aphasia, a complex language impairment, as an exemplar community of people with complex accessibility needs. To better understand and meet the needs of end users, I will use collaborative design techniques, involving people with aphasia in the design, development and evaluation of systems that demonstrate the benefits of content personalisation as an accessibility intervention. This paper outlines the background and motivation of this PhD, the work that has already been completed, and the currently planned future work.
{"title":"Object-Based Access: Enhancing Accessibility with Data-Driven Media","authors":"Alexandre Nevsky","doi":"10.1145/3573381.3596500","DOIUrl":"https://doi.org/10.1145/3573381.3596500","url":null,"abstract":"Audiovisual media is an integral part of many people’s everyday lives. People with accessibility needs, especially people with complex accessibility needs, however, may face challenges accessing this content. This doctoral work addresses this problem by investigating how complex accessibility needs can be met by content personalisation by leveraging data-driven methods. To this end, I will collaborate with people with aphasia, a complex language impairment, as an exemplar community of people with complex accessibility needs. To better understand the needs of people with aphasia, I will use collaborative design techniques to meet the needs of end users. This will involve them in the design, development and evaluation of systems that demonstrate the benefits of content personalisation as an accessibility intervention. This paper outlines the background and motivation to this PhD, the work that has already been completed, and current planned future work.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122898774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accessing match statistics through a second screen while watching soccer matches on TV has grown into a popular practice. Although early works have shown how gestures on touch screens perform in distracting environments, little is known about how specific gestures (swiping and tapping) used to retrieve information on a second screen affect the viewing experience of soccer games on TV. To investigate this, a mixed-method user study, which included prototype tests of watching short clips of a soccer match, questionnaires and short interviews, was conducted with 28 participants. The results revealed that more participants preferred tapping than swiping under two different second-screen activity time scenarios, i.e., On-Play and Off-Play. However, neither swiping nor tapping yielded better performance in recalling verbatim match stats and exact comparisons in either On-Play or Off-Play. Participant evaluations in On-Play and the interviews give us clues regarding this difference.
{"title":"Tap or Swipe? Effects of Interaction Gestures for Retrieval of Match Statistics via Second Screen on Watching Soccer on TV","authors":"Ege Sezen, Emmanuel Tsekleves, A. Mauthe","doi":"10.1145/3573381.3596473","DOIUrl":"https://doi.org/10.1145/3573381.3596473","url":null,"abstract":"Accessing match statistics through second screen while watching soccer matches on TV has grown into a popular practice. Although early works have shown how gestures on touch screens performed under distracting environments, little is known regarding how specific gestures (swiping and tapping) to retrieve information on second screen affect the viewing experience of soccer games on TV. For this, a mixed-method user study, which included prototype tests of watching short clips of a soccer match, questionnaires and short interviews, was conducted with 28 participants. The results revealed that the number of people who preferred tapping was more than the number of people who favored swiping under two different second screen activity time scenarios i.e. On-Play or Off-Play. However, neither swiping nor tapping yield better performance of recalling verbatim match stats and exact comparisons in both On-Play and Off-Play. Participant evaluations in On-Play and interviews give us clues regarding such difference.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116088282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration presents ScenaConnect, a multisensory device that allows people to live various multisensory experiences. ScenaConnect is inexpensive, compact and easy to install, and it makes it possible to enrich experiences by adding new interactions. The demonstration will present two use cases: the first is an interactive math exercise, and the second is a multisensory experience that takes the visitor on a journey through history. ScenaConnect could also be used in museums for immersive and interactive experiences, or by teachers to make their students’ learning more interactive and better adapted to them. The longer-term goal is to allow non-experts in computer science to quickly integrate ScenaConnect into a wide variety of experiences thanks to the software ScenaProd, which, like ScenaConnect, is a goal of the PRIM project presented in more detail in this paper.
{"title":"ScenaConnect: an original device to enhance experiences with multisensoriality","authors":"Justin Debloos, C. Jost, D. Archambault","doi":"10.1145/3573381.3597225","DOIUrl":"https://doi.org/10.1145/3573381.3597225","url":null,"abstract":"This demonstration aims at presenting ScenaConnect, a multisensory device which allows people to live various several multisensory experiences. ScenaConnect is inexpensive, compact, easy to install and allows to improve experiences in added new interactions. The demonstration will present two cases of use. The first one is an interactive math exercise and the second one is a multisensory experience that will take the visitor on a journey through history. Moreover, ScenaConnect could be used in museums for immersive and interactive experiences or by a teacher who can use it to make the learning of his students more interactive and adapted. The perspectives are to allows non-expert in computer science to quickly integrate ScenaConnect in several and various experiences thanks to the software ScenaProd, which is, like ScenaConnect, a goal of the PRIM project presented in more detail on this paper.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132112239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper contains the research proposal of Juan Antonio De Rus presented at the IMX ’23 Doctoral Symposium. Virtual Reality (VR) applications are already used to support diverse tasks such as online meetings, education, or training, and their usage grows every year. To enrich the experience, VR scenarios typically include multimodal content (video, audio, text, synthetic content) and multi-sensory stimuli. Tools to evaluate the Quality of Experience (QoE) of such scenarios are needed. Traditional tools used to evaluate the QoE of users performing any kind of task typically involve surveys, user testing or analytics. However, these methods provide limited insights for our VR tasks, and they have shortcomings and limited scalability. In this doctoral study we have formulated a set of open research questions and objectives on which we plan to generate contributions and knowledge in the field of Affective Computing (AC) and Multimodal Interactive Virtual Environments. Hence, in this paper we present a set of tools we are developing to automatically evaluate QoE in different use cases. They include dashboards to monitor, in real time, reactions to different events in the form of emotions and affects predicted by different models based on physiological data, as well as the creation of a dataset for AC and its associated methodology.
{"title":"Towards the Creation of Tools for Automatic Quality of Experience Evaluation with Focus on Interactive Virtual Environments","authors":"Juan Antonio De Rus Arance, M. Montagud, M. Cobos","doi":"10.1145/3573381.3596508","DOIUrl":"https://doi.org/10.1145/3573381.3596508","url":null,"abstract":"This paper contains the research proposal of Juan Antonio De Rus presented at the IMX 23 Doctoral Symposium. Virtual Reality (VR) applications are already used to support diverse tasks such as online meetings, education, or training, and the usages grow every year. To enrich the experience VR scenarios, include multimodal content (video, audio, text, synthetic content) and multi-sensory stimuli are typically included. Tools to evaluate the Quality of Experience (QoE) of such scenarios are needed. Traditional tools used to evaluate the QoE of users performing any kind of task typically involves surveys, user testing or analytics. However, these methods provide limited insights for our tasks with VR and have shortcomings and a limited scalability. In this doctoral study we have formulated a set of open research questions and objectives on which we plan to generate contributions and knowledge in the field of Affective Computing (AC) and Multimodal Interactive Virtual Environments. Hence, in this paper we present a set of tools we are developing to automatically evaluate QoE in different use cases. 
They include dashboards to monitor in real time reactions to different events in the form of emotions and affections predicted by different models based on physiological data, as well as the creation of a dataset for AC and its associated methodology.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133519284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}