Detecting Human Attitudes through Interactions with Responsive Environments. Jan Torpus, C. Spindler, Jonas Kellermeyer (IMX 2023). doi:10.1145/3573381.3596160
This paper is based on developments from the research project “Paradigms of Ubiquitous Computing” (2019-23), funded by the Swiss National Science Foundation (SNSF, 100016_185436 / 1). It investigates the impact of environmentally embedded sensor-actuator systems on humans. Taking a critical stance, we examine human-machine interfaces to make quantitative statements about people's behavior patterns and attitudes based solely on their physical interactions with a responsive environment. By staging different paradigms of Ubiquitous Computing in an experimental setup and evaluating them with test participants, we aim to gain insights into how humans experience and appropriate immersive and sometimes challenging situations. The artistic approach draws on strategies from New Media Art and Speculative Design and is not aligned with processes commonly used in applied research and development. The evaluation design is based on mixed methods, with a strong emphasis on semantic differentials to quantify user interactions with electronically enhanced devices and furnishings. The focus is on interaction design strategies and evaluation design methods.
Immersion or Disruption?: Readers’ Evaluation of and Requirements for (3D-)audio as a Tool to Support Immersion in Digital Reading Practices. Iris Jennes, Elias Blanckaert, Wendy Van den Broeck (IMX 2023). doi:10.1145/3573381.3596151
In this paper, we aim to contribute to the understanding of how readers experience immersion in digital reading, more specifically digital reading supported by (3D-)audio tracks. We formulate user and content requirements for implementing (3D-)audio soundtracks for readers in a digital reading application. The main research question addressed in this paper is: (how) can audio aid the immersion of readers in digital fiction stories? To answer this question, three online focus group discussions were organised in Belgium and Germany. As part of the set-up of the Horizon Europe project Möbius, 18 participants tested different 3D-audio tracks while reading via the Thorium Reader application. The results first address how participants define immersion, and how the role of audio in immersion can become paradoxical. The paper then presents a detailed evaluation of the factors enabling or disabling immersion for the specific 3D-audio tracks, and how these insights can be implemented in reading apps via user and content requirements.
{"title":"Immersion or Disruption?: Readers’ Evaluation of and Requirements for (3D-)audio as a Tool to Support Immersion in Digital Reading Practices.","authors":"Iris Jennes, Elias Blanckaert, Wendy Van den Broeck","doi":"10.1145/3573381.3596151","DOIUrl":"https://doi.org/10.1145/3573381.3596151","url":null,"abstract":"In this paper, we aim to contribute to the understanding of how readers experience immersion in digital reading experiences, more specifically with digital reading supported by (3D-)audio tracks. We formulate user and content requirements for implementing (3D-)audio soundtracks for readers in a digital reading application. The main research question addressed in this paper is: (how) can audio aid the immersion of readers in digital fiction stories? To answer this question, three online focus group discussions were organised in Belgium and Germany. As part of the set-up of the Horizon Europe project Möbius, 18 participants tested different 3D-audio tracks while reading via the Thorium Reader application. The results first address how participants define immersion, and how the role of audio in immersion can become paradoxical. Then, the paper presents a detailed evaluation of the factors en- or disabling immersion for the specific 3D-audio tracks, and how these insights can be implemented in reading apps via user and content requirements.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127260522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-Centered and AI-driven Generation of 6-DoF Extended Reality. Jit Chatterjee, Maria Torres Vega (IMX 2023). doi:10.1145/3573381.3597232
To unlock the full potential of Extended Reality (XR) and its application to societal sectors such as health (e.g., training) or Industry 5.0 (e.g., remote control of infrastructure), there is a need for highly realistic environments that enhance the user's sense of presence. However, current photo-realistic content generation methods (such as Light Fields) require massive amounts of data transmission (i.e., ultra-high bandwidths) and extreme computational power for display, so they are not suited to interactive, immersive, and realistic applications. In this research, we hypothesize that it is possible to generate realistic dynamic 3D environments by means of Deep Generative Networks. The work will consist of two parts: (1) a computer vision system that generates the 3D environment from 2D images, and (2) a Human-Computer Interaction (HCI) system that predicts Regions of Interest (RoI) for efficient 3D rendering and assesses user perception subjectively and objectively (by means of presence) to enhance 3D scene quality. This work aims to gain insights into how well deep generative methods can create realistic and immersive environments, which will significantly help future developments in realistic and immersive XR content creation.
Proof-of-Concept Study to Evaluate the Impact of Spatial Audio on Social Presence and User Behavior in Multi-Modal VR Communication. Felix Immohr, Gareth Rendle, A. Neidhardt, Steve Göring, Rakesh Rao Ramachandra Rao, Stephanie Arévalo Arboleda, Bernd Froehlich, Alexander Raake (IMX 2023). doi:10.1145/3573381.3596458
This paper presents a proof-of-concept study that analyzes the effect of simple diotic vs. spatial, position-dynamic binaural synthesis on social presence in VR, compared with face-to-face communication in the real world, for a sample two-party scenario. A conversational task with a shared visual reference was realized. The collected data include questionnaires for direct assessment, tracking data, and audio and video recordings of the individual participants’ sessions for indirect evaluation. While tendencies for improvements with binaural over diotic presentation can be observed, no significant difference in social presence was found for the considered scenario. The gestural analysis revealed that participants used the same number and types of gestures face-to-face as in VR, highlighting the importance of non-verbal behavior in communication. As part of the research, an end-to-end framework for conducting communication studies and analyses has been developed.
{"title":"Proof-of-Concept Study to Evaluate the Impact of Spatial Audio on Social Presence and User Behavior in Multi-Modal VR Communication","authors":"Felix Immohr, Gareth Rendle, A. Neidhardt, Steve Göring, Rakesh Rao Ramachandra Rao, Stephanie Arévalo Arboleda, Bernd Froehlich, Alexander Raake","doi":"10.1145/3573381.3596458","DOIUrl":"https://doi.org/10.1145/3573381.3596458","url":null,"abstract":"This paper presents a proof-of-concept study conducted to analyze the effect of simple diotic vs. spatial, position-dynamic binaural synthesis on social presence in VR, in comparison with face-to-face communication in the real world, for a sample two-party scenario. A conversational task with shared visual reference was realized. The collected data includes questionnaires for direct assessment, tracking data, and audio and video recordings of the individual participants’ sessions for indirect evaluation. While tendencies for improvements with binaural over diotic presentation can be observed, no significant difference in social presence was found for the considered scenario. The gestural analysis revealed that participants used the same amount and type of gestures in face-to-face as in VR, highlighting the importance of non-verbal behavior in communication. As part of the research, an end-to-end framework for conducting communication studies and analysis has been developed.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116815197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zenctuary VR: Simulating Nature in an Interactive Virtual Reality Application: Description of the design process of creating a garden in Virtual Reality with the aim of testing its restorative effects. Á. Bakk, B. Tölgyesi, Máté Barkóczi, Balázs Buri, András Szabó, Botond Tobai, Iva Georgieva, Christian Roth (IMX 2023). doi:10.1145/3573381.3597215
In this paper we present the design process of a virtual reality experience whose aim is to have a restorative effect on users. In the simulated natural site, the user can interact with some elements of the environment and can also explore the view. We describe how we sought to create a more realistic sense of nature through high-quality graphics, the use of free-roaming space, and naturalistic interactions. During the design process we avoided gameful interactions and instead created playful ones, while also relying on the multimodal capabilities of virtual reality technology.
{"title":"Zenctuary VR: Simulating Nature in an Interactive Virtual Reality Application: Description of the design process of creating a garden in Virtual Reality with the aim of testing its restorative effects.","authors":"Á. Bakk, B. Tölgyesi, Máté Barkóczi, Balázs Buri, András Szabó, Botond Tobai, Iva Georgieva, Christian Roth","doi":"10.1145/3573381.3597215","DOIUrl":"https://doi.org/10.1145/3573381.3597215","url":null,"abstract":"In this paper we present the design process of a virtual reality experience the aim of which is to have a restorative effect on users. In the simulated natural site, the user can interact with some elements of the environment and can also explore the view. We describe how we tried to create a more realistic sense of nature by relying on high quality graphics, the use of free-roaming space, and naturalistic interactions. During the design process we avoided gameful interactions and instead created playful interactions, while also relying on the multimodal aspect of the virtual reality technology.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114197922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What’s my future: a Multisensory and Multimodal Digital Human Agent Interactive Experience. Anna Sheremetieva, Ihor Romanovych, Sam Frish, M. Maksymenko, Orestis Georgiou (IMX 2023). doi:10.1145/3573381.3596161
This paper describes an interactive multimodal and multisensory fortune-telling experience for digital signage applications that combines digital human agents with touchless haptic technology and gesture recognition. For the first time, human-to-digital human interaction is mediated through hand gesture input and mid-air haptic feedback, motivating further research into multimodal and multisensory location-based experiences using these and related technologies. We take a phenomenological approach, present our design process and system architecture, and discuss the insights we gained, along with some of the challenges and opportunities we encountered during this exercise. Finally, we use our implementation as a proxy for discussing complex aspects such as privacy, consent, gender neutrality, and the use of digital non-fungible tokens at the phygital border of the metaverse.
{"title":"What’s my future: a Multisensory and Multimodal Digital Human Agent Interactive Experience","authors":"Anna Sheremetieva, Ihor Romanovych, Sam Frish, M. Maksymenko, Orestis Georgiou","doi":"10.1145/3573381.3596161","DOIUrl":"https://doi.org/10.1145/3573381.3596161","url":null,"abstract":"This paper describes an interactive multimodal and multisensory fortune-telling experience for digital signage applications that combines digital human agents along with touchless haptic technology and gesture recognition. For the first time, human-to-digital human interaction is mediated through hand gesture input and mid-air haptic feedback, motivating further research into multimodal and multisensory location-based experiences using these and related technologies. We take a phenomenological approach and present our design process, the system architecture, and discuss our gained insights, along with some of the challenges and opportunities we have encountered during this exercise. Finally, we use our singular implementation as a paradigm as a proxy for discussing complex aspects such as privacy, consent, gender neutrality, and the use of digital non-fungible tokens at the phygital border of the metaverse.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126382393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Quality of Experience Evaluation of an Interactive Multisensory 2.5D Virtual Reality Art Exhibit. Chen Chen, Niall Murray, Conor Keighrey (IMX 2023). doi:10.1145/3573381.3597214
In recent years, museums have become more interactive and immersive through the adoption of technology within large-scale art exhibitions. As a result, new types of cultural experiences are more appealing to a younger audience. Despite these positive changes, some museum experiences are still primarily focused on visual art, which remains out of reach for those with visual impairments. Such unimodal, visually dominated experiences restrict users who depend on non-visual sensory feedback to experience the world around them. In this paper, the authors propose a novel VR experience that incorporates multisensory technologies. It allows individuals to engage and interact with a visual-art museum experience presented as a fully immersive VR environment. Users can interact with virtual paintings and trigger sensory zones that deliver multisensory feedback. These sensory zones are unique to each painting, presenting thematic audio and smells, custom haptic feedback to feel the artwork, and air, light, and thermal changes, in an effort to engage those with visual impairments.
Accessibility Research in Digital Audiovisual Media: What Has Been Achieved and What Should Be Done Next? Alexandre Nevsky, Timothy Neate, E. Simperl, Radu-Daniel Vatavu (IMX 2023). doi:10.1145/3573381.3596159
The consumption of digital audiovisual media is a mainstay of many people’s lives. However, people with accessibility needs often have issues accessing this content. With a view to addressing this inequality, researchers have explored a wide range of interventions to bridge this accessibility gap. Despite this work, our understanding of the capability of these interventions is poor. In this paper, we address this through a systematic review of the literature, creating and analysing a dataset of N = 181 scientific papers. We found that certain areas have accrued a disproportionate amount of attention from the research community: for example, blind and visually impaired and d/Deaf and hard of hearing people account for the large majority of papers (N = 170). We describe the challenges researchers have addressed, the end-user communities of focus, and the interventions examined. We conclude by evaluating gaps in the literature and areas that warrant more focus in the future.
{"title":"Accessibility Research in Digital Audiovisual Media: What Has Been Achieved and What Should Be Done Next?","authors":"Alexandre Nevsky, Timothy Neate, E. Simperl, Radu-Daniel Vatavu","doi":"10.1145/3573381.3596159","DOIUrl":"https://doi.org/10.1145/3573381.3596159","url":null,"abstract":"The consumption of digital audiovisual media is a mainstay of many people’s lives. However, people with accessibility needs often have issues accessing this content. With a view to addressing this inequality, there exists a wide range of interventions that researchers have explored to bridge this accessibility gap. Despite this work, our understanding of the capability of these interventions is poor. In this paper, we address this through a systematic review of the literature, creating a dataset of and analysing N = 181 scientific papers. We have found that certain areas have accrued a disproportionate amount of attention from the research community – for example, blind and visually impaired and d/Deaf and hard of hearing people account for of papers (N = 170). We describe challenges researchers have addressed, end-user communities of focus, and interventions examined. We conclude by evaluating gaps in the literature and areas that could use more focus on in the future.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"24 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123478743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavior as a Function of Video Quality in an Ecologically Valid Experiment. Dominika Wanat, L. Janowski, K. De Moor (IMX 2023). doi:10.1145/3573381.3597235
Most user studies in the multimedia QoE domain are done by asking users about quality. This approach has advantages: it yields many answers and reduces variance through repeated measurements. However, the results obtained in this way may differ from those observed in a real application, since quality is rarely asked about so explicitly in everyday life. It is more natural to focus on user behavior. The proposed PhD focuses on a method for performing experiments based on observations of a participant’s behavior. We address two main challenges that exist in any new experiment design: how to assess the internal validity of the proposed method and how to analyze the obtained data. The data analysis we propose is based on psychometric functions. We propose two different experiments, one of which is already ongoing.
A Preliminary Study of the Eye Tracker in the Meta Quest Pro. Shu Wei, Desmond Bloemers, Aitor Rovira (IMX 2023). doi:10.1145/3573381.3596467
This paper presents the preliminary results of accuracy testing of the Meta Quest Pro’s eye tracker. We conducted user testing to evaluate spatial accuracy, spatial precision, and subjective performance under head-free and head-restrained conditions. Our measurements indicate an average accuracy of 1.652° with a precision of 0.699° (standard deviation) and 0.849° (root mean square) for a visual field spanning 15° in the head-free condition. The signal quality of the Quest Pro’s eye tracker is comparable to that of existing AR/VR eye-tracking headsets. Notably, careful consideration is required when designing the size of scene objects, mapping areas of interest, and determining the interaction flow. Researchers should also be cautious when interpreting fixation results if multiple targets are in close proximity. Further investigation and more transparent specification information are needed to establish the device’s capabilities and limitations.