Title: Using Virtual Reality for scenario-based Responsible Research and Innovation approach for Human Robot Co-production
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00033
D. Aschenbrenner, D. V. Tol, Z. Rusák, C. Werker
This paper proposes using Virtual Reality scenarios to explore stakeholder reactions within an innovation process, in the context of introducing robots that work in close collaboration with users. The goal is to design the system upfront in such a way that it is not perceived as a threat to workers or their jobs. Within the responsible research and innovation approach, the introduction of new technology needs to be accompanied by a careful investigation of the thoughts and feelings of all stakeholders. In particular, workers who do not currently work with robots, but whose workspace is undergoing an Industry 4.0-driven transformation, fear that this new technology will make their jobs redundant. At the same time, successful robot interaction processes can be observed to increase overall productivity and also enhance human well-being. The feeling of “teamwork” with the artificial intelligence entity can develop to be equally positive and motivating. VR can be applied to design future workspaces that produce a perception of “teamwork” rather than one of “fear”.
{"title":"Using Virtual Reality for scenario-based Responsible Research and Innovation approach for Human Robot Co-production","authors":"D. Aschenbrenner, D. V. Tol, Z. Rusák, C. Werker","doi":"10.1109/AIVR50618.2020.00033","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00033","url":null,"abstract":"This paper proposes to use Virtual Reality scenarios to explore the reaction of stakeholders within an innovation process in the context of the introduction of robots working in close collaboration with users. The goal is to design the system upfront in such a way, that it is not perceived as a threat to the worker or his/her job. Within the responsible research and innovation approach, the introduction of new technology needs to be accompanied by a careful investigation of the thoughts and feelings of all stakeholders. Especially workers who are currently not working with robots but their workspace is currently undergoing an Industry 4.0 driven transformation, experience fear, that this new technology will make their jobs redundant. On the other hand, it can be observed, that successful robot interaction processes, on the one hand, increase the overall productivity, but also can enhance human well-being. The feeling of “teamwork” with the artificial intelligence entity can develop to be equally positive and motivating. To be able to design future workspaces which will result in a “teamwork” perception instead of the “fear” perception, the use of VR can be applied.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117051027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Juxtopia® CAMMRAD PREPARE: Wearable AI-AR Platform for Clinical Training Emergency First Response Teams
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00047
J. Doswell, Justin Johnson, Brandon Brockington, Aaron Mosby, Arthur Chinery
The Juxtopia® Open-Wear research team collaborated with the Maryland Fire & Rescue Institute (MFRI) to test how the Juxtopia® artificial intelligence (AI) wearable augmented reality (AR) intervention may better deliver hands-free clinical training to firefighter Emergency Medical Technicians (EMTs) and prepare them to respond effectively to hazardous material (HAZMAT) incidents. During a controlled study, human subjects participated in minimal-risk research (i.e., as either victims or caregivers) in which firefighter EMTs took part in a simulated training exercise that mimicked their real-world operations. The study spanned two testing days. Day one included 10 victims and 20 caregivers who participated in a full day of training and familiarized themselves with a wearable AR head-mounted display (HMD) and the Juxtopia® Virtual Tutor (JVT) software application. The results demonstrated that an AI-instructor-enabled AR system can train EMTs in core clinical skills for effective HAZMAT response.
{"title":"Juxtopia® CAMMRAD PREPARE: Wearable AI-AR Platform for Clinical Training Emergency First Response Teams","authors":"J. Doswell, Justin Johnson, Brandon Brockington, Aaron Mosby, Arthur Chinery","doi":"10.1109/AIVR50618.2020.00047","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00047","url":null,"abstract":"The Juxtopia® Open-Wear research team collaborated with the Maryland Fire & Rescue Institute (MFRI) to test how the Juxtopia® artificial intelligent (AI) wearable augmented reality (AR) intervention may better deliver a hands-free clinical training intervention to firefighter Emergency Medical Technicians (EMT) and prepare them for effective response to hazardous material (HAZMAT) incidences. During a controlled study, human subjects participated in a minimal risk research (i.e., both as victims or caregivers) in which firefighter EMTs participated in a simulated training exercise that mimicked their real-world operations. During the study, there were two testing days. Day one included (10) victims and (20) caregivers who participated in a full day of training and familiarized themselves with wearable AR Head Mounted Display (HMD) and a Juxtopia® Virtual Tutor (JVT) software application. The results demonstrated that an AI instructor enabled AR system can train EMTs in core clinical skills for effective HAZMAT response.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126180239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Invisible Dynamic Mechanic Adjustment in Virtual Reality Games
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00057
Justus Robertson, R. E. Cardona-Rivera, R. Young
A key part of managing a player’s virtual reality experience is ensuring that the environment behaves consistently in response to the player’s interaction. In some instances, however, it is important to change how the world behaves, i.e. its simulation rules or mechanics, because doing so preserves the virtual environment’s intended quality. Mechanics changes must be made carefully; if too overt, they may be perceivable and potentially thwart a player’s sense of presence or agency. This paper reports the results of a study that demonstrates the widely held but heretofore untested belief that changing an environment’s mechanics without considering what the player knows is visible to the player. The study’s findings motivate the paper’s second contribution: an automated method for invisible dynamic mechanics adjustment, which affords shifting a game’s previously established mechanics in a manner that is not perceivably inconsistent to players. This method depends on a knowledge-tracking strategy, and two such strategies are presented: (1) a conservative one, relevant to a wide variety of virtual environments, and (2) a more nuanced one, relevant to environments experienced via head-mounted virtual reality displays. The paper concludes with a variety of design-centered considerations for the use of this artificial intelligence system within virtual reality.
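The abstract only names the two knowledge-tracking strategies; as a rough illustration of how the conservative one might work, the sketch below freezes any simulation rule the player has already seen fire, so adjustments can only touch rules the player has no knowledge of. All names here (KnowledgeTracker, MechanicsManager, try_adjust, the rule-id strings) are hypothetical and not the authors' implementation.

```python
# Minimal sketch of a "conservative" knowledge-tracking strategy:
# any rule that has ever fired in front of the player is treated as
# known and therefore frozen, so a swapped rule can never contradict
# something the player has already observed.
class KnowledgeTracker:
    def __init__(self):
        self.observed_rules: set[str] = set()

    def record_observation(self, rule_id: str) -> None:
        # Conservative: one observation is enough to freeze the rule.
        # (The paper's HMD-based strategy would instead record an
        # observation only when the rule fires inside the headset's view.)
        self.observed_rules.add(rule_id)

    def can_adjust(self, rule_id: str) -> bool:
        return rule_id not in self.observed_rules


class MechanicsManager:
    def __init__(self, tracker: KnowledgeTracker):
        self.tracker = tracker
        self.rules: dict[str, object] = {}

    def try_adjust(self, rule_id: str, new_effect) -> bool:
        # Invisible adjustment: swap the rule's effect only if the
        # player could not perceive the resulting inconsistency.
        if self.tracker.can_adjust(rule_id):
            self.rules[rule_id] = new_effect
            return True
        return False


tracker = KnowledgeTracker()
manager = MechanicsManager(tracker)
print(manager.try_adjust("door_unlock", lambda world: world))  # True: never seen
tracker.record_observation("door_unlock")
print(manager.try_adjust("door_unlock", lambda world: world))  # False: now frozen
```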
{"title":"Invisible Dynamic Mechanic Adjustment in Virtual Reality Games","authors":"Justus Robertson, R. E. Cardona-Rivera, R. Young","doi":"10.1109/AIVR50618.2020.00057","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00057","url":null,"abstract":"A key part of managing a player’s virtual reality experience is ensuring that the environment behaves consistently to the player’s interaction. In some instances, however, it is important to change how the world behaves–i.e. the world’s simulation rules or mechanics–because doing so preserves the virtual environment’s intended quality. Mechanics changes must be done carefully; if too overt, they may be perceivable and potentially thwart a player’s sense of presence or agency.This paper reports the result of a study, which demonstrates the widely-held but heretofore-untested belief that changing an environment’s mechanics without considering what the player knows is visible to the player. The study’s findings motivate the paper’s second contribution: an automated method to perform invisible dynamic mechanics adjustment, which affords shifting a game’s previously-established mechanics in a manner that is not perceivably inconsistent to players. This method depends on a knowledge-tracking strategy and two such strategies are presented: (1) a conservative one, relevant to a wide variety of virtual environments, and (2) a more nuanced one, relevant to environments that will be experienced via head-mounted virtual reality displays. The paper concludes with a variety of design-centered considerations for the use of this artificial intelligence system within virtual reality.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125340618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Introducing the concept of ikigai to the ethics of AI and of human enhancements
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00032
Soenke Ziesche, Roman V. Yampolskiy
It has been shown that an important criterion for human happiness and longevity is expressed by the Japanese concept of ikigai, meaning “reason or purpose to live”. Over the course of their lives, humans usually search for their individual ikigai, ideally find it, and hence devote time to it. As both AI and XR are widely expected to become increasingly disruptive to our daily time-use patterns, they will likely also affect the space of potential ikigai. Since ikigai constitutes a vital component of human lives, these consequences have to be examined with regard to both ethical human enhancement and ikigai-friendly AI. In this paper the term “i-risk” is introduced for undesirable scenarios in which humans, and potentially other minds, are deprived of the pursuit of their individual ikigai. The paper outlines ikigai-related challenges as well as desiderata for three categories: XR/human enhancement, AI safety, and AI welfare.
{"title":"Introducing the concept of ikigai to the ethics of AI and of human enhancements","authors":"Soenke Ziesche, Roman V. Yampolskiy","doi":"10.1109/AIVR50618.2020.00032","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00032","url":null,"abstract":"It has been shown that an important criterion for human happiness and longevity is what is expressed by the Japanese concept of ikigai, which means “reason or purpose to live”. In the course of their lives humans usually search for their individual ikigai, ideally find it and hence devote time to it. As it is widely expected that both AI and XR will be increasingly disruptive of our known daily time use schedule, this will likely also have an impact on the space of potential ikigai. Since ikigai constitutes a vital component of the lives of humans, these consequences for ikigai have to be examined towards both ethical human enhancement as well as ikigai-friendly AI. In this paper the term “i-risk” is introduced for undesirable scenarios in which humans and potentially also other minds are deprived of the pursuit of their individual ikigai. This paper outlines ikigai-related challenges as well as desiderata for the three categories XR/human enhancement, AI safety and AI welfare.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122141554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Review of Deep Learning Approaches to EEG-Based Classification of Cybersickness in Virtual Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00072
Caglar Yildirim
Cybersickness is an unpleasant side effect of exposure to virtual reality (VR), manifesting as physiological repercussions such as nausea and dizziness triggered by VR exposure. Given its debilitating effect on the user experience in VR, academic interest in the automatic detection of cybersickness from physiological measurements has crested in recent years. Electroencephalography (EEG) has been extensively used to capture changes in the brain’s electrical activity and to automatically classify cybersickness from brainwaves using a variety of machine learning algorithms. Recent advances in deep learning (DL) algorithms and the increasing availability of computational resources for DL have paved the way for a new line of research applying DL frameworks to EEG-based detection of cybersickness. Accordingly, this paper presents a systematic review of the peer-reviewed papers concerned with the application of DL frameworks to the classification of cybersickness from EEG signals. The relevant literature was identified through exhaustive database searches, and the papers were scrutinized with respect to experimental protocols for data collection, data preprocessing, and DL architectures. The review revealed a limited number of studies in this nascent area of research and showed that the DL frameworks reported in these studies (i.e., DNN, CNN, and RNN) could classify cybersickness with an average accuracy of 93%. The review summarizes the trends and issues in applying DL frameworks to EEG-based detection of cybersickness and offers guidelines for future research.
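None of the reviewed architectures are specified in the abstract; as a minimal sketch of the CNN family it mentions, the model below classifies fixed-length EEG windows into sick/not-sick. The channel count, window length, and layer sizes are illustrative assumptions, written in PyTorch rather than taken from any of the reviewed papers.

```python
# Minimal sketch: a 1D CNN over multichannel EEG windows for binary
# cybersickness classification. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class EEGCybersicknessCNN(nn.Module):
    def __init__(self, n_channels: int = 32, n_classes: int = 2):
        super().__init__()
        # Temporal convolutions over each window of shape (batch, channels, time).
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples), e.g. a 2 s window at 256 Hz.
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = EEGCybersicknessCNN()
window = torch.randn(8, 32, 512)   # 8 windows, 32 channels, 512 samples
logits = model(window)             # (8, 2): cybersick vs. not cybersick
```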
{"title":"A Review of Deep Learning Approaches to EEG-Based Classification of Cybersickness in Virtual Reality","authors":"Caglar Yildirim","doi":"10.1109/AIVR50618.2020.00072","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00072","url":null,"abstract":"Cybersickness is an unpleasant side effect of exposure to a virtual reality (VR) experience and refers to such physiological repercussions as nausea and dizziness triggered in response to VR exposure. Given the debilitating effect of cybersickness on the user experience in VR, academic interest in the automatic detection of cybersickness from physiological measurements has crested in recent years. Electroencephalography (EEG) has been extensively used to capture changes in electrical activity in the brain and to automatically classify cybersickness from brainwaves using a variety of machine learning algorithms. Recent advances in deep learning (DL) algorithms and increasing availability of computational resources for DL have paved the way for a new area of research into the application of DL frameworks to EEGbased detection of cybersickness. Accordingly, this review involved a systematic review of the peer-reviewed papers concerned with the application of DL frameworks to the classification of cybersickness from EEG signals. The relevant literature was identified through exhaustive database searches, and the papers were scrutinized with respect to experimental protocols for data collection, data preprocessing, and DL architectures. The review revealed a limited number of studies in this nascent area of research and showed that the DL frameworks reported in these studies (i.e., DNN, CNN, and RNN) could classify cybersickness with an average accuracy rate of 93%. This review provides a summary of the trends and issues in the application of DL frameworks to the EEG-based detection of cybersickness, with some guidelines for future research.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130494124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Assessing the Effectiveness of Virtual Reality Gaming to Reduce Anxiety and Increase Cognitive Bandwidth
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00065
Daniel Hawes, A. Arya
Recent research indicates that a majority of postsecondary students in North America have “felt overwhelming anxiety” in the past few years, an effect that negatively impacts their academic performance and overall well-being. Building on recent technology and cognitive priming research, we propose a theoretical framework and technology solution to address the student anxiety challenge using technology-based priming. As an initial test of our theoretical framework, this study examines the effectiveness of Virtual Reality gaming applications in reducing anxiety and increasing cognitive bandwidth. In this preliminary within-subjects study (N = 10), primed participants showed a marked increase in cognitive test performance after the priming activity compared to the non-primed test session. The results also showed that highly anxious subjects derived more benefit from the priming activity than less anxious subjects.
{"title":"Assessing the Effectiveness of Virtual Reality Gaming to Reduce Anxiety and Increase Cognitive Bandwidth","authors":"Daniel Hawes, A. Arya","doi":"10.1109/AIVR50618.2020.00065","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00065","url":null,"abstract":"Recent research indicates that a majority of postsecondary students in North America “felt overwhelming anxiety” in the past few years, an effect that is negatively impacting their academic performance and overall well-being. Building on recent technology and cognitive priming research, we propose a theoretical framework and technology solution to address the student anxiety challenge using technologybased priming. As an initial test of our theoretical framework, this study aims to test the effectiveness of Virtual Reality gaming applications to reduce anxiety and increase cognitive bandwidth. In this preliminary, within-subjects study design, N=10, the primed participants showed a marked increase in cognitive test performance subsequent to the priming activity compared to the non-primed test session. The results also showed that highly anxious subjects derived more benefit from the priming activity than less anxious subjects.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130555406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Avatars rendering and its effect on perceived realism in Virtual Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00046
Elena Molina, A. Jerez, Núria Pelechano Gómez
Immersive virtual environments have proven to be a plausible platform for multiple disciplines to simulate different types of scenarios and situations at low cost. When participants immersed in a virtual environment experience presence, they are more likely to behave as if they were in the real world. Improving the level of realism should provide a more compelling scenario, so that users experience higher levels of presence and are thus more likely to behave as if they were in the real world. This paper presents preliminary results of an experiment in which participants navigate through two versions of the same scenario with different levels of realism in both the environment and the avatars. Our current results, from a between-subjects experiment, show that the reported levels of visualization quality are not significantly different, which means that other aspects of the virtual environment and/or avatars must be taken into account to improve the perceived level of realism.
{"title":"Avatars rendering and its effect on perceived realism in Virtual Reality","authors":"Elena Molina, A. Jerez, Núria Pelechano Gómez","doi":"10.1109/AIVR50618.2020.00046","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00046","url":null,"abstract":"Immersive virtual environments have proven to be a plausible platform to be used by multiple disciplines to simulate different types of scenarios and situations at a low cost. When participants are immersed in a virtual environment experience presence, they are more likely to behave as if they were in the real world. Improving the level of realism should provide a more compelling scenario so that users will experience higher levels of presence, and thus be more likely to behave as if they were in the real world. This paper presents preliminary results of an experiment in which participants navigate through two versions of the same scenario with different levels of realism of both the environment and the avatars. Our current results, from a between subjects experiment, show that the reported levels of quality in the visualization are not significantly different, which means that other aspects of the virtual environment and/or avatars must be taken into account in order to improve the perceived level of realism.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121723705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Preliminary Investigation into a Deep Learning Implementation for Hand Tracking on Mobile Devices
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00079
M. Gruosso, N. Capece, U. Erra, Francesco Angiolillo
Hand tracking is an essential component of computer graphics and human-computer interaction applications. Using an RGB camera, without dedicated hardware and sensors (e.g., depth cameras), allows solutions to be developed for a plethora of devices and platforms. Although various methods have been proposed, hand tracking from a single RGB camera remains a challenging research area due to occlusions, complex backgrounds, and the variety of hand poses and gestures. We present a mobile application for 2D hand tracking from RGB images captured by the smartphone camera. The images are processed by a deep neural network, modified specifically to tackle this task and run on mobile devices, seeking a compromise between performance and computational time. The network output is used to overlay a 2D skeleton on the user’s hand. We tested our system in several scenarios, demonstrating interactive hand tracking and achieving promising results under variable brightness and backgrounds and small occlusions.
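The paper's network is not described in the abstract; the sketch below only illustrates the general class of solution it suggests: a lightweight, depthwise-separable encoder regressing one 2D heatmap per hand keypoint, small enough to target mobile inference. All architecture details (the HandKeypointNet name, 21 keypoints, layer widths) are our assumptions, not the authors' model.

```python
# Minimal sketch of a mobile-friendly 2D hand keypoint network:
# a depthwise-separable backbone plus a per-keypoint heatmap head.
import torch
import torch.nn as nn

def depthwise_block(cin: int, cout: int, stride: int = 1) -> nn.Sequential:
    # Depthwise-separable convolution, the usual mobile building block.
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, stride=stride, padding=1, groups=cin, bias=False),
        nn.BatchNorm2d(cin),
        nn.ReLU6(inplace=True),
        nn.Conv2d(cin, cout, 1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU6(inplace=True),
    )

class HandKeypointNet(nn.Module):
    def __init__(self, n_keypoints: int = 21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),
            depthwise_block(32, 64),
            depthwise_block(64, 128, stride=2),
            depthwise_block(128, 128),
        )
        # One heatmap per keypoint; the argmax of each heatmap gives
        # that joint's 2D position for drawing the skeleton overlay.
        self.head = nn.Conv2d(128, n_keypoints, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

model = HandKeypointNet()
frame = torch.randn(1, 3, 224, 224)  # one RGB camera frame
heatmaps = model(frame)              # (1, 21, 56, 56)
```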
{"title":"A Preliminary Investigation into a Deep Learning Implementation for Hand Tracking on Mobile Devices","authors":"M. Gruosso, N. Capece, U. Erra, Francesco Angiolillo","doi":"10.1109/AIVR50618.2020.00079","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00079","url":null,"abstract":"Hand tracking is an essential component of computer graphics and human-computer interaction applications. The use of RGB camera without specific hardware and sensors (e.g., depth cameras) allows developing solutions for a plethora of devices and platforms. Although various methods were proposed, hand tracking from a single RGB camera is still a challenging research area due to occlusions, complex backgrounds, and various hand poses and gestures. We present a mobile application for 2D hand tracking from RGB images captured by the smartphone camera. The images are processed by a deep neural network, modified specifically to tackle this task and run on mobile devices, looking for a compromise between performance and computational time. Network output is used to show a 2D skeleton on the user’s hand. We tested our system on several scenarios, showing an interactive hand tracking level and achieving promising results in the case of variable brightness and backgrounds and small occlusions.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132576501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Story ARtist
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00067
M. Kegeleers, Rafael Bidarra
Most content creation applications currently in use are conventional PC applications with visualisation on a 2D screen and indirect interaction, e.g. through mouse and keyboard. Augmented Reality (AR) is a medium that can provide true 3D visualisation and more hands-on interaction for these applications, since it adds virtual elements to a real-world environment. We explored how AR can be used for story authoring, a specific type of content creation, and investigated how the two existing types of AR interfaces, tangible and touch-less, can be usefully combined in that context [1]. The Story ARtist application was developed to evaluate the designed interactions and AR visualisation for story authoring. It features a tabletop environment that dynamically visualises the story authoring elements, augmented by the 3D space that AR provides. Story authoring is kept simple, with a linear plot-point structure focused on core story elements such as actions, characters, and objects.
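As a rough sketch of the linear plot-point structure the abstract describes, the data model below strings together points that each combine an action, characters, and objects. The field names are hypothetical, not the application's actual schema.

```python
# Minimal sketch of a linear plot-point story structure.
from dataclasses import dataclass, field

@dataclass
class PlotPoint:
    action: str                     # core story element: what happens
    characters: list[str]           # who is involved
    objects: list[str] = field(default_factory=list)  # props used

@dataclass
class Story:
    title: str
    plot_points: list[PlotPoint] = field(default_factory=list)

    def add_point(self, point: PlotPoint) -> None:
        # Linear structure: points are simply appended in story order.
        self.plot_points.append(point)

story = Story("Tabletop demo")
story.add_point(PlotPoint("finds", ["Knight"], ["Map"]))
story.add_point(PlotPoint("unlocks", ["Knight"], ["Castle gate"]))
```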
{"title":"Story ARtist","authors":"M. Kegeleers, Rafael Bidarra","doi":"10.1109/AIVR50618.2020.00067","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00067","url":null,"abstract":"Most content creation applications currently in use are conventional PC applications with visualisation on a 2D screen and indirect interaction, e.g. through mouse and keyboard. Augmented Reality (AR) is a medium that can provide actual 3D visualisation and more hands-on interaction for these applications, due to its technology adding virtual elements to a real-world environment. We explored how AR can be used for story authoring, a specific type of content creation, and investigated how both types of existing AR interfaces, tangible and touch-less, can be combined in a useful way in that context [1]. The Story ARtist application was developed to evaluate the designed interactions and AR visualisation for story authoring. It features a tabletop environment to dynamically visualise the story authoring elements, augmented by the 3D space that AR provides. Story authoring is kept simple, with a linear plot point structure focused on core story elements like actions, characters and objects.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124393209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Future Mobility Solutions: A Use Case for Understanding How VR Influences User Perception
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00038
Onur Yildirim, Catlin Pidel, Mirjam West
Virtual Reality (VR) affords the opportunity to experience things that would be cost-prohibitive, dangerous, or even impossible in real life. One such impossibility is virtual time travel, a way to be fully immersed in a simulation of the past or future. Our team used user-centered design methodologies and expert guidance to create a VR scenario exploring what the future of urban transportation could look like. Focusing on a sharing economy (e.g. mobility as a service), the experience uses existing technologies as well as science-based theoretical concepts to create an immersive, interactive simulation of how daily transit habits could evolve. A subsequent user study explored how VR can influence users’ attitudes and perceptions towards these mobility concepts and technologies. Our results show that VR is an effective way to explain complex concepts quickly and intuitively and may also play a role in broadening user perspectives.
{"title":"Future Mobility Solutions: A Use Case for Understanding How VR Influences User Perception","authors":"Onur Yildirim, Catlin Pidel, Mirjam West","doi":"10.1109/AIVR50618.2020.00038","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00038","url":null,"abstract":"Virtual Reality (VR) affords the opportunity to experience things that could be cost-prohibitive, dangerous, or even impossible in real life. One of these impossibilities is virtual time traveling, a way to be fully immersed in a simulation of the past or future. Our team used user-centered design methodologies and expert guidance to create a VR scenario exploring what the future of urban transportation could look like. Focusing on a sharing economy (e.g. mobility as a service), the experience uses existing technologies as well as science-based theoretical concepts to create an immersive, interactive simulation of how daily transit habits could evolve. A subsequent user study explored how VR can influence someone’s attitudes and perceptions towards these sorts of mobility concepts and technologies. Our results show that VR is an effective way to quickly and intuitively explain complex concepts and may also play a role in broadening user perspectives.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128654570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}