We developed a novel interaction technique that allows virtual reality (VR) users to experience “weight” when hefting virtual, weightless objects. With this technique the perception of weight is evoked via constraints on the speed with which objects can be lifted. When hefted, heavier virtual objects move slower than lighter virtual objects. If lifters move faster than the lifted object, the object will fall. This constraint causes lifters to move slowly when lifting heavy objects. In two studies we showed that the size-weight illusion (SWI) is evoked when this technique is employed. The SWI occurs when two items of identical weight and different size are lifted and the smaller item is perceived as heavier than the larger item. The persistence of this illusion in VR indicates that participants bring their real-world knowledge of the relationship between size and weight to their virtual experience, and suggests that our interaction technique succeeds in making the visible tangible.
{"title":"Making the Visual Tangible: Substituting Lifting Speed Limits for Object Weight in VR","authors":"Veronica Weser;Dennis R. Proffitt","doi":"10.1162/pres_a_00319","DOIUrl":"https://doi.org/10.1162/pres_a_00319","url":null,"abstract":"<para>We developed a novel interaction technique that allows virtual reality (VR) users to experience “weight” when hefting virtual, weightless objects. With this technique the perception of weight is evoked via constraints on the speed with which objects can be lifted. When hefted, heavier virtual objects move slower than lighter virtual objects. If lifters move faster than the lifted object, the object will fall. This constraint causes lifters to move slowly when lifting heavy objects. In two studies we showed that the size-weight illusion (SWI) is evoked when this technique is employed. The SWI occurs when two items of identical weight and different size are lifted and the smaller item is perceived as heavier than the larger item. The persistence of this illusion in VR indicates that participants bring their real-world knowledge of the relationship between size and weight to their virtual experience, and suggests that our interaction technique succeeds in making the visible tangible.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"68-79"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00319","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50225422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lack of trust in or acceptance of technology is one of the fundamental problems that might prevent the dissemination of automated driving. Technological advances, such as augmented reality aids like full-sized windshield displays or AR contact lenses, could help provide users with a better understanding of the system. In this work, we took up the question of whether augmented reality assistance has the potential to increase user acceptance and trust by communicating system decisions (i.e., transparent system behavior). To test our hypothesis, we conducted two driving simulator studies investigating the benefit of scenario augmentation in fully automated driving: first in the normal viewing direction (N=26) and then in the rearward viewing direction (N=18). Quantitative results indicate that augmenting traffic objects or participants that are otherwise invisible (e.g., due to dense fog), or presenting upcoming driving maneuvers while sitting backwards, is a feasible approach to increasing user acceptance and trust. These results are further backed by qualitative findings from semistructured interviews and UX curves (a method for retrospectively reporting experience over time). We conclude that augmented reality, particularly with the emergence of more powerful, lightweight, and integrated devices, holds high potential for automated driving.
{"title":"Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality","authors":"Philipp Wintersberger;Anna-Katharina Frison;Andreas Riener;Tamara von Sawitzky","doi":"10.1162/pres_a_00320","DOIUrl":"https://doi.org/10.1162/pres_a_00320","url":null,"abstract":"<para>Lack of trust in or acceptance of technology are some of the fundamental problems that might prevent the dissemination of automated driving. Technological advances, such as augmented reality aids like full-sized windshield displays or AR contact lenses, could be of help to provide a better system understanding to the user. In this work, we picked up on the question of whether augmented reality assistance has the potential to increase user acceptance and trust by communicating system decisions (i.e., transparent system behavior). To prove our hypothesis, we conducted two driving simulator studies to investigate the benefit of scenario augmentation in fully automated driving—first in normal (<inline-formula><mml:math><mml:mrow><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>26</mml:mn></mml:mrow></mml:math></inline-formula>) and then in rearward viewing (<inline-formula><mml:math><mml:mrow><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>18</mml:mn></mml:mrow></mml:math></inline-formula>) direction. Quantitative results indicate that the augmentation of traffic objects/participants otherwise invisible (e.g., due to dense fog), or the presentation of upcoming driving maneuvers while sitting backwards, is a feasible approach to increase user acceptance and trust. Results are further backed by qualitative findings from semistructured interviews and UX curves (a method to retrospectively report experience over time). We conclude that the application of augmented reality, in particular with the emergence of more powerful, lightweight, or integrated devices, is a good opportunity with high potential for automated driving.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"46-62"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50351704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to statistics published in IEEE Spectrum (Prins, Gunkel, Stokking, & Niamut, 2018), total revenue for virtual reality hardware and software will grow 200-fold from its 2014 baseline by 2020. According to the latest update of the IDC Worldwide Semiannual Augmented and Virtual Reality Spending Guide, the automotive domain will be one beneficiary of augmented reality (AR) and virtual reality (VR) technology, with spending expected to reach $12.8 billion by 2020. Another 2017 study, by the German Association for the Digital Economy (BVDW) and Accenture (Lucas, 2017), outlines the opportunities and challenges of AR and VR in the automotive industry. AR/VR technology will increase road safety, bring intuitive activities to driving, and ultimately enhance the driving experience. AR/VR technology may also ease the transition toward automated driving. AR head-up displays (HUDs) may soon overlay 3D navigation instructions onto road geometry,
{"title":"Special Issue of Presence: Virtual and Augmented Reality Virtual and Augmented Reality for Autonomous Driving and Intelligent Vehicles: Guest Editors' Introduction","authors":"Andreas Riener;Joe Gabbard;Mohan Trivedi","doi":"10.1162/pres_e_00323","DOIUrl":"https://doi.org/10.1162/pres_e_00323","url":null,"abstract":"According to statistics published in IEEE Spectrum (Prins, Gunkel, Stokking, & Niamut, 2018), the total revenue for virtual reality hardware and software will grow 200-fold from the 2014 basic value by 2020. According to the latest update of the IDC Worldwide Semiannual Augmented and Virtual Reality Spending Guide, the automotive domain will be one beneficiary of augmented reality (AR) and virtual reality (VR) technology with spending expected to reach $12.8 billion by 2020. Another 2017 study by the German Association for the Digital Economy (BVDW) and Accenture (Lucas, 2017) outlines the opportunities and challenges of AR and VR in the automotive industry. AR/VR technology will increase road safety, bring intuitive activities to driving, and finally enhance driving experience. AR/VR technology may also help on the transition toward automated driving. AR head-up displays (HUDs) may soon overlay 3D navigation instructions onto road geometry,","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"i-iv"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_e_00323","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50351813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The First Original Copy refers to any first true 3D facsimile of a digitally reproduced physical object. The notion of a copy being the first and original implies that it is unique, and therefore the approach used for managing rights and ownership influences its value. Whilst virtual goods traded within virtual worlds are subject to rules and policies, the production of digital objects in the real world has no mechanism by which rarity and uniqueness can be guaranteed. Digital copies are subject to further copying; thus, the value of even an exact copy can never be perceived as equivalent to that of its original. Through what means can we imbue 3D reproductions of cultural objects with value that is at least asymptotic to that of their originals? There may be a candidate solution. This article discusses a possible approach for resolving long-standing issues related to authenticity, ownership, perpetuity, and the quantitative tracking of value associated with 3D copies. Blockchains essentially bring the systemic management of virtual objects within virtual worlds into the real world. This forum article examines the candidate solution by answering the question above and discusses the issues associated with the concept of the First Original Copy.
{"title":"The First Original Copy and the Role of Blockchain in the Reproduction of Cultural Heritage","authors":"Eugene Ch'ng","doi":"10.1162/pres_a_00313","DOIUrl":"https://doi.org/10.1162/pres_a_00313","url":null,"abstract":"<para>The First Original Copy refers to any first true 3D facsimile of a digitally reproduced physical object. The notion of a copy being the first and original implies that it is unique and therefore the approach used for managing rights and ownership influences its value. Whilst virtual goods traded within virtual worlds are subject to rules and policies, the production of digital objects in the real world does not have a mechanism from which rarity and uniqueness can be guaranteed. Digital copies are subject to further copying and thus, the value of even an exact copy can never be perceived to be equivalent to its original. Through what means can we imbue 3D reproductions of cultural objects with value that is at least asymptotic to their originals? There may be a candidate solution. Discussed in this article is a possible approach for resolving a long-term issue related to authenticity, ownership, perpetuity, and the quantitative tracking of value associated with 3D copies. Blockchains essentially bring the systemic management of virtual objects within virtual worlds into the real world. This forum article examines the candidate solution by answering the questions above, and discusses the issues associated with the concept of the First Original Copy.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"151-162"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00313","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50225426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although drivers gain experience with age, many older drivers face age-related deteriorations that can lead to a higher crash risk. Head-Up Displays (HUDs) have been linked to significant improvements in driving performance for older drivers by tackling issues related to aging. For this study, two Augmented Reality (AR) HUD virtual car navigation solutions were tested (one screen-fixed, one world-fixed), aiming to improve navigation performance and reduce the discrepancy between younger and older drivers by aiding the appropriate allocation of attention and easing the interpretation of navigational information. Twenty-five participants (12 younger, 13 older) undertook a series of drives in a medium-fidelity simulator with three navigational conditions (virtual car HUD, static HUD arrow graphic, and traditional head-down satnav). Results showed that older drivers tended to achieve navigational success rates similar to those of the younger group, but experienced higher objective mental workload. Only for the static HUD arrow graphic were the differences in most workload questionnaire items and in objective workload between younger and older participants not significant. The virtual car led to improved navigation performance for all drivers compared to the other systems. Hence, both AR HUD systems show potential for older drivers, which needs to be investigated further in a real-world driving context.
{"title":"An Investigation of the Effects of Driver Age When Using Novel Navigation Systems in a Head-Up Display","authors":"Sanna M. Pampel;Katherine Lamb;Gary Burnett;Lee Skrypchuk;Chrisminder Hare;Alex Mouzakitis","doi":"10.1162/pres_a_00317","DOIUrl":"https://doi.org/10.1162/pres_a_00317","url":null,"abstract":"<para>Although drivers gain experience with age, many older drivers are faced with age-related deteriorations that can lead to a higher crash risk. Head-Up Displays (HUDs) have been linked to significant improvements in driving performance for older drivers by tackling issues related to aging. For this study, two Augmented Reality (AR) HUD virtual car navigation solutions were tested (one screen-fixed, one world-fixed), aiming to improve navigation performance and reduce the discrepancy between younger and older drivers by aiding the appropriate allocation of attention and easing interpretation of navigational information. Twenty-five participants (12 younger, 13 older) undertook a series of drives within a medium-fidelity simulator with three different navigational conditions (virtual car HUD, static HUD arrow graphic, and traditional head-down satnav). Results showed that older drivers tended to achieve navigational success rates similar to the younger group, but experienced higher objective mental workload. Solely for the static HUD arrow graphic, differences in most workload questionnaire items and objective workload between younger and older participants were not significant. The virtual car led to improved navigation performance of all drivers, compared to the other systems. Hence, both AR HUD systems show potential for older drivers, which needs to be further investigated in a real-world driving context.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"32-45"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50351703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We report on an experiment on the distracting effects of in-car conversations conducted through augmented-reality glasses. Previous research showed that in-car phone conversations can be distracting, but that the distraction might be reduced if the remote caller receives visual information about the driving context. However, what happens if such video sharing becomes bidirectional? The recent introduction of commercial augmented-reality glasses, in particular, might allow drivers to engage in video-supported conversations while driving. In an experiment, we investigate how distracting such video-based conversations are. Our participants operated a simulated vehicle while also playing a conversational game (Taboo) with a remote conversant. The driver either only heard the remote conversant (speech-only condition) or could also see the remote person in a virtual window presented through augmented reality (video call condition). Results show that our participants did not spend time looking at the video of the remote conversant. We hypothesize that this was because participants had to turn their heads to get a full view of the virtual window. Our results imply that further studies on the effects of augmented reality on the visual attention of the driver are needed before the technology is used on the road.
{"title":"Calling while Driving Using Augmented Reality: Blessing or Curse?","authors":"Andrew L. Kun;Hidde van der Meulen;Christian P. Janssen","doi":"10.1162/pres_a_00316","DOIUrl":"https://doi.org/10.1162/pres_a_00316","url":null,"abstract":"<para>We report on an experiment on the distracting effects of in-car conversations through augmented-reality glasses. Previous research showed that in-car phone conversations can be distracting, but that the distraction might be reduced if the remote caller receives visual information about the driving context. However, what happens if such video sharing becomes bidirectional? The recent introduction of commercial augmented-reality glasses in particular might allow drivers to engage in video-supported conversations while driving. We investigate how distracting such video-based conversations are in an experiment. Our participants operated a simulated vehicle, while also playing a conversational game (Taboo) with a remote conversant. The driver either only heard the remote conversant (speech-only condition), or was also able to see the remote person in a virtual window that was presented through augmented reality (video call condition). Results show that our participants did not spend time looking at the video of the remote conversant. We hypothesize that this was due to the fact that in our experiment participants had to turn their head to get a full view of the virtual window. Our results imply that we need further studies on the effects of augmented reality on the visual attention of the driver, before the technology is used on the road.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"1-14"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00316","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50351814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article discusses factors related to simulation sickness in virtual reality driving simulations with head-mounted displays. Simulation sickness is a well-known phenomenon that has physiological effects on users, such as disorientation, headache, and nausea. There are three major theories of why simulation sickness arises. Previous research on this phenomenon has mostly concentrated on driving or flying simulators with standard computer displays. It is therefore reasonable to conclude that any simulated environment could have such an effect, and virtual reality should not be considered an exception. While virtual reality has had, and will continue to have, a positive impact on the development and testing of new automotive interior concepts, simulation sickness is a significant drawback. Despite advances in technology, discomfort from using head-mounted displays has yet to be resolved. A review of these displays in the context of virtual reality driving applications in recent years is presented, along with a characterization and comparison of approaches to mitigating simulation sickness. Concluding suggestions for future work on the relationship between simulation sickness and virtual driving environments are provided.
{"title":"A Survey on Simulation Sickness in Driving Applications with Virtual Reality Head-Mounted Displays","authors":"Stanislava Rangelova;Elisabeth Andre","doi":"10.1162/pres_a_00318","DOIUrl":"https://doi.org/10.1162/pres_a_00318","url":null,"abstract":"<para>This article discusses factors related to simulation sickness in virtual reality driving simulations with head-mounted displays. Simulation sickness is a well-known phenomenon that has physiological effects on users, such as disorientation, headache, and nausea. There are three major theories why simulation sickness arises. Previous research on this phenomenon has mostly concentrated on driving or flying simulators with standard computer displays. It is, therefore, possible to conclude that any simulated environment could have such an effect, and virtual reality should not be considered an exception to such problems. While virtual reality has had and will continue to have a positive impact on the development and testing of new automotive interior concepts, simulation sickness is a significant drawback. Despite the advances in technology, discomfort from using head-mounted displays has yet to be resolved. A review of these displays in the context of virtual reality driving applications over the recent years will be presented. Moreover, characterization and comparison of approaches to mitigate simulation sickness will be given in the text. Concluding suggestions for future work on the correlation between simulation sickness and a virtual driving environment will be provided.</para>","PeriodicalId":101038,"journal":{"name":"Presence","volume":"27 1","pages":"15-31"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/pres_a_00318","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50351815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An incident commander (IC) is expected to take command in any incident to mitigate consequences for humans, property, and the environment. To prepare for this, practice-based training in realistic simulated situations is necessary. Usually this is conducted as live simulation (LS) at dedicated (physical) training grounds or as virtual simulation (VS) at training centers, where all participants are present at the same geographical location. COVID-19-induced restrictions on gatherings of people motivated the development and use of remote virtual simulation (RVS) solutions. This article aims to provide an increased understanding of the implementation of RVS in the education of Fire Service ICs in Sweden. Data from observations, questionnaires, and interviews were collected during an RVS examination of two IC classes (43 participants) following an initial pilot study (8 participants). Experienced training values, presence, and performance were investigated. The results indicated that students experienced higher presence in RVS compared with previous VS studies, likely due to the concentration of visual attention on the virtual environment and well-acted verbal counterplay. Although all three training methods (LS, VS, and RVS) are valuable, future research is needed to reveal the compromises each makes compared with real-life incidents.
{"title":"Can Remote Virtual Simulation Improve Practice-Based Training? Presence and Performance in Incident Commander Education","authors":"Cecilia Hammar Wijkmark;Ilona Heldal;Maria-Monika Metallinou","doi":"10.1162/pres_a_00346","DOIUrl":"https://doi.org/10.1162/pres_a_00346","url":null,"abstract":"Abstract An incident commander (IC) is expected to take command in any incident to mitigate consequences for humans, property, and the environment. To prepare for this, practice-based training in realistic simulated situations is necessary. Usually this is conducted in live simulation (LS) at dedicated (physical) training grounds or in virtual simulation (VS) situations at training centers, where all participants are present at the same geographical space. COVID-19-induced restrictions on gathering of people motivated the development and use of remote virtual simulation (RVS) solutions. This article aims to provide an increased understanding of the implementation of RVS in the education of Fire Service ICs in Sweden. Data from observations, questionnaires, and interviews were collected during an RVS examination of two IC classes (43 participants) following an initial pilot study (8 participants). Experienced training values, presence, and performance were investigated. The results indicated that students experienced higher presence in RVS, compared with previous VS studies. This is likely due to the concentration of visual attention to the virtual environment and well-acted verbal counterplay. Although all three training methods (LS, VS, and RVS) are valuable, future research is needed to reveal their respective significant compromises, compared with real-life incidents.","PeriodicalId":101038,"journal":{"name":"Presence","volume":"28 ","pages":"127-152"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6720227/10159601/10159614.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50328191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing for augmented reality (AR) applications is difficult and expensive, and a rapid method for the early design process of spatial interfaces is required. Previous research has used video for mobile AR design, but this does not extend to head-mounted AR. AR is an emergent technology with no prior design precedent, requiring designers to allow free speculation or risk the pitfalls of “path dependence.” In this article, a participatory elicitation method we call “spatial informance design” is presented. We found that combining “informance design,” “Wizard of Oz,” improvisation, and “paper prototyping” is a fast and lightweight solution for the ideation of rich designs for spatial interfaces. A study using our method with 11 participants produced both similar and wildly different interface configurations and interactions for an augmented reality email application. Based on our findings, we propose design implications and provide an evaluation of spatial informance for the design of head-mounted AR applications.
{"title":"A Spatial Informance Design Method to Elicit Early Interface Prototypes for Augmented Reality","authors":"Joe Cowlyn;Nick Dalton","doi":"10.1162/pres_a_00344","DOIUrl":"https://doi.org/10.1162/pres_a_00344","url":null,"abstract":"Abstract Designing for augmented reality (AR) applications is difficult and expensive. A rapid system for the early design process of spatial interfaces is required. Previous research has used video for mobile AR design, but this is not extensible to head-mounted AR. AR is an emergent technology with no prior design precedent, requiring designers to allow free speculation or risk the pitfalls of “path dependence.” In this article, a participatory elicitation method we call “spatial informance design” is presented. We found combining “informance design,” “Wizard of Oz,” improvisation, and “paper prototyping,” to be a fast and lightweight solution for ideation of rich designs for spatial interfaces. A study using our method with 11 participants, produced similar and wildly different interface configurations and interactions for an augmented reality email application. Based on our findings, we propose design implications and an evaluation of our method using spatial informance for the design of head-mounted AR applications.","PeriodicalId":101038,"journal":{"name":"Presence","volume":"28 ","pages":"207-226"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50328194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A key affordance of virtual reality is the capability of immersive VR to prompt spatial presence through the stereoscopic lenses of the head-mounted display (HMD). We investigated the effect of a stereoscopic view of a game, Cellverse, on users’ perceived spatial presence, knowledge of cells, and learning across three levels of spatial knowledge: route, landmark, and survey knowledge. Fifty-one participants played the game using the same game controllers but with different views: 28 had a stereoscopic view (HMD) and 23 had a non-stereoscopic view (computer monitor). Participants explored a diseased cell for clues to diagnose the disease type and recommend a therapy. We gathered surveys, drawings, and spatial tasks conducted in the game environment to gauge learning. Participants’ spatial knowledge of the cell environment and knowledge of cell concepts improved after gameplay in both conditions. Spatial presence scores in the stereoscopic condition were higher than in the non-stereoscopic condition, with a large effect size; however, there was no significant difference in levels of spatial knowledge between the two groups. Most drawings showed a change in cell knowledge; some participants changed only in spatial knowledge of the cell, and some changed in both cell knowledge and spatial knowledge. The evidence suggests that a stereoscopic view has a significant effect on users’ experience of spatial presence, but that increased presence does not directly translate into spatial learning.
{"title":"Stereoscopic Views Improve Spatial Presence but Not Spatial Learning in VR Games","authors":"Cigdem Uz-Bilgin;Meredith Thompson;Eric Klopfer","doi":"10.1162/pres_a_00349","DOIUrl":"https://doi.org/10.1162/pres_a_00349","url":null,"abstract":"Abstract A key affordance of virtual reality is the capability of immersive VR to prompt spatial presence resulting from the stereoscopic lenses in the head-mounted display (HMD). We investigated the effect of a stereoscopic view of a game, Cellverse, on users’ perceived spatial presence, knowledge of cells, and learning in three levels of spatial knowledge: route, landmark, and survey knowledge. Fifty-one participants played the game using the same game controllers but with different views; 28 had a stereoscopic view (HMD), and 23 had a non-stereoscopic view (computer monitor). Participants explored a diseased cell for clues to diagnose the disease type and recommend a therapy. We gathered surveys, drawings, and spatial tasks conducted in the game environment to gauge learning. Participants’ spatial knowledge of the cell environment and knowledge of cell concepts improved after gameplay in both conditions. Spatial presence scores in the stereoscopic condition were higher than the non-stereoscopic condition with a large effect size; however, there was no significant difference in levels of spatial knowledge between the two groups. Most drawings showed a change in cell knowledge; yet some participants only changed in spatial knowledge of the cell, and some changed in both cell knowledge and spatial knowledge. Evidence suggests that a stereoscopic view has a significant effect on users’ experience of spatial presence, but that increased presence does not directly translate into spatial learning.","PeriodicalId":101038,"journal":{"name":"Presence","volume":"28 ","pages":"227-245"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50328195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}