Mixed Reality Communication System for Procedural Tasks
M. Rebol, C. Ranniger, C. Hood, E. Horan, A. Rutenberg, N. Sikka, Y. Ajabnoor, Safinaz Alshikah, Krzysztof Pietroszek
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3534497
Abstract: We design a volumetric communication system for remote assistance with procedural medical tasks. The system allows a remote expert to spatially guide a local operator using a real-time volumetric representation of the patient. Guidance is provided by voice, a virtual hand metaphor, and annotations performed in situ. We include the feedback received from medical professionals and early NASA TLX [5] data on the cognitive load of the system.
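The abstract above reports early NASA TLX data on cognitive load. For context, the commonly used unweighted variant ("raw TLX") is simply the mean of the six subscale ratings (mental demand, physical demand, temporal demand, performance, effort, frustration), each on a 0-100 scale. A minimal sketch, not taken from the paper:

```python
def raw_tlx(ratings):
    """Unweighted (raw) NASA TLX score: the mean of the six subscale
    ratings, each rated on a 0-100 scale."""
    if len(ratings) != 6:
        raise ValueError("NASA TLX uses exactly six subscale ratings")
    return sum(ratings) / 6

# Example: a moderately loaded participant
score = raw_tlx([60, 40, 50, 30, 70, 50])  # -> 50.0
```

The original weighted TLX additionally scales each subscale by pairwise-comparison weights; raw TLX is the simpler variant often reported in HCI studies.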
Understanding Shoulder Surfer Behavior and Attack Patterns Using Virtual Reality
Yasmeen Abdrabou, S. Rivu, Tarek Ammar, Jonathan Liebers, Alia Saad, C. Liebers, Uwe Gruenefeld, Pascal Knierim, M. Khamis, Ville Makela, Stefan Schneegass, Florian Alt
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3531106
Abstract: In this work, we explore attacker behavior during shoulder surfing. As such behavior is often opportunistic and difficult to observe in real-world settings, we leverage the capabilities of virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, participants shoulder surfed private screens displaying different types of content. From the results, we derive an understanding of the factors influencing shoulder surfing behavior, reveal common attack patterns, and sketch a behavioral shoulder surfing model. Our work suggests directions for future research on shoulder surfing and can serve as a basis for novel approaches to mitigating it.
Watch The Videos Whenever You Have Time: Asynchronously Involving Neurologists in VR Prototyping
Zahra Aminolroaya, Wesley Willett, Samuel Wiebe, C. Josephson, F. Maurer
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3531181
Abstract: We present a video-based approach for collecting feedback on virtual reality (VR) prototypes. While developing a high-fidelity VR prototype to help neurologists analyze seizure propagation information for brain surgery planning, our neurologist collaborators' limited availability reduced opportunities for them to give feedback on critical design decisions. In response, we developed a remote feedback process in which developers created videos of the VR design process and used these to ground iterative input from neurologist collaborators. We describe our approach and detail opportunities and challenges for video-based feedback to play a role in future VR prototyping.
Implicit Interaction Approach for Car-related Tasks On Smartphone Applications
Alba Bisante, Emanuele Panizzi, Stefano Zeppieri
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3531173
Abstract: This work proposes an implicit interaction approach to ease the implementation of basic car-related tasks in a smartphone application. Many drivers use smartphone apps for support in typical tasks related to car usage, yet some of the available apps offer a poor user experience because they demand the user's attention, causing distraction while driving. In addition, they often rely on users repeatedly entering relevant data. Implicit interaction is a possible way to improve the user experience of car-related interfaces. Basic user tasks for many car applications are (i) reporting that the car has been parked in a specific position, (ii) declaring that the user will soon free a parking spot, and (iii) declaring that a new trip with the car has begun (and thus that a parking spot has become free). We describe the proposed context-aware interaction approach to these tasks, together with its implementation in an application that leverages the smartphone's sensing of the user's location and motion activity and merges them to infer parking and unparking events.
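The sensor-fusion idea in the abstract above, merging motion-activity transitions with the last known location to infer parking and unparking events, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the `Sample` structure and the "driving"/"walking" activity labels are hypothetical stand-ins for a phone's activity-recognition and location APIs:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float   # seconds since start of trace
    activity: str      # e.g. "driving" or "walking", from an activity-recognition API
    location: tuple    # (latitude, longitude), from a location API

def infer_events(samples):
    """Infer parking/unparking events from activity transitions.

    A "driving" -> "walking" transition suggests the car was just parked
    at the last driving location; a "walking" -> "driving" transition
    suggests the user is starting a new trip and freeing that spot.
    """
    events = []
    prev = None
    for s in samples:
        if prev is not None and prev.activity != s.activity:
            if prev.activity == "driving" and s.activity == "walking":
                events.append(("parked", prev.location, s.timestamp))
            elif prev.activity == "walking" and s.activity == "driving":
                events.append(("unparked", s.location, s.timestamp))
        prev = s
    return events
```

A real deployment would also need to debounce noisy activity classifications and handle missing location fixes; the sketch only shows the core inference step.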
Corpus Summarization and Exploration using Multi-Mosaics
Shane Sheehan, S. Luz, M. Masoodian
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3534468
Abstract: In fields such as translation studies and computational linguistics, various tools are used to analyze the content of text corpora and to extract keywords and other entities for analysis. Concordancing – arranging passages of a text corpus in alphabetical order of user-defined keywords – is one of the most widely used forms of text analysis. This paper describes Multi-Mosaics, a tool for text analysis using multiple implicitly linked Concordance Mosaic visualisations. Multi-Mosaics supports examining linguistic relationships within the context windows surrounding multiple extracted keywords.
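The concordancing technique defined in the abstract above can be illustrated with a minimal keyword-in-context (KWIC) sketch. This is not the Multi-Mosaics implementation; the function name, the whitespace tokenization, and the window size are assumptions for illustration:

```python
def concordance(text, keywords, window=3):
    """Build a simple keyword-in-context (KWIC) concordance.

    Returns one (keyword, left-context, right-context) tuple per keyword
    occurrence, with `window` words of context on each side, sorted
    alphabetically by keyword (then by context).
    """
    tokens = text.split()
    targets = {k.lower() for k in keywords}
    lines = []
    for i, tok in enumerate(tokens):
        word = tok.lower().strip(".,;:!?")
        if word in targets:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append((word, left, right))
    return sorted(lines)
```

For example, `concordance("the cat sat on the mat while the dog barked at the cat", ["cat", "dog"])` yields one sorted line per occurrence of "cat" and "dog", each with its surrounding context window.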
Interactive Volumetric Stories Across Immersive Displays
Krzysztof Pietroszek, M. Rebol, Liudmila Tahai
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3534498
Abstract: We describe three interactive augmented reality stories for children that we showed at the Cannes Film Festival "Marche du Film" in July 2021. The stories were developed using a novel technique: 3D modeling and animation from within virtual reality. The audience at Cannes viewed and interacted with these stories using mixed reality glasses, a prototype of the Sony Spatial Display, and an AR-enabled tablet. We report on the technical development process and the feedback from the Cannes audience.
MagicMuseum: Team-based Experiences in Interactive Smart Spaces for Cultural Heritage Education
Simone Ghiazzi, Stefano Riva, Mattia Gianotti, Pietro Crovari, F. Garzotto
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3534488
Abstract: MagicMuseum is a set of team-based, immersive, full-body activities for Cultural Heritage Education of primary school children. MagicMuseum exploits the interactive and multisensory capability of the Magic Room, an indoor smart space equipped with IoT-enriched components such as floor and wall projections, smart lighting, music and sound, motion and gesture sensors, and smart objects. The paper describes MagicMuseum and briefly reports an exploratory study involving 22 children at a local primary school.
A Multimodal Conversational Interface to Support the creation of customized Social Stories for People with ASD
R. Francese, A. Guercio, Veronica Rossano, Deepshikha Bhati
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3531118
Abstract: This paper proposes a multimodal conversational interface that supports caregivers of people with ASD in the semi-automatic creation of social stories. Through the multimodal interface, the editor lets caregivers select the appropriate elements and content representations to include in the social stories, following specific guidelines. We describe the interface design and a preliminary evaluation of the mobile application, in which seven caregivers of people with ASD participated. Usability results are encouraging.
Bridging Virtual and Reality in Mobile Augmented Reality Applications to Promote Immersive Experience
Qingfeng Wu, Yixian Li, Yingying She, Fang Liu, Yan Luo, Xinyu Yang
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3531122
Abstract: Immersion is a powerful and important interactive experience. However, little is known about how we can facilitate immersion in Mobile Augmented Reality (MAR) applications. Establishing a relationship between the virtual and the real is considered a promising way to promote immersion. To enhance immersion in MAR, we present BRIDGE, an interaction design model which builds a bridge between the virtual and the real through three kinds of relationships: the virtual object is closely tied to the real environment where the user is (contextual relationship); the virtual object has the same physical properties as the real world (physical relationship); and the user imitates real-world interactions by directly manipulating the virtual world with their hands (interactive relationship). To evaluate the BRIDGE model, we implement it in an application design and conduct a comparative study with 32 users, exploring the immersive user experience under contextual versus non-contextual, physical versus non-physical, and natural-interaction versus screen-touch conditions. The quantitative and qualitative results show that virtual objects have a stronger presence, and users are more immersed in the environment, when contextual and physical relationships hold and users can interact naturally. This study is a first step toward a better understanding of the characteristics that contribute to an immersive experience and how they affect human perception and the presence of virtual objects. We hope these results provide design insights for MAR applications.
Does Size Matter? Exploring how Standard and Large-Scale Displays affect Off-site Experts during AR-Remote Collaboration
Bernardo Marques, S. Silva, Paulo Dias, B. Santos
Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 2022-06-06. DOI: 10.1145/3531073.3534473
Abstract: Most efforts on Augmented Reality (AR) remote collaboration have been devoted to on-site individuals in need of assistance. Yet, since remote experts do not have access to the local environment, it is also important to explore how different methods affect their understanding of the situation and their interaction. This work describes a user study with 16 participants that compared how remote individuals interact with standard (C1) and large-scale (C2) displays while providing assistance in a remote maintenance scenario. Results suggest that condition C2 was preferred by the majority of participants, who considered it more useful for analyzing important details of the task context and for creating content more easily.