“What Is the Relationship Among Positive Emotions, Sense of Presence, and Ease of Interaction in Virtual Reality Systems? An On-Site Evaluation of a Commercial Virtual Experience.” Federica Pallavicini, Alessandro Pepe, Ambra Ferrari, Giacomo Garcea, Andrea Zanacchi, and Fabrizia Mantovani. Presence 27(2), 183–201 (2020). DOI: 10.1162/pres_a_00325

Scientific knowledge is still limited about the effect of commercial virtual reality content, such as experiences developed for advertising purposes, on individual emotional experience. In addition, even though correlations between emotional responses and perceived sense of presence in virtual reality have often been reported, the relationship remains unclear. Some studies have suggested an important effect of ease of interaction on both emotions and the sense of presence, but few studies have explored this topic scientifically. Within this context, this study aimed to: (a) test whether a commercial virtual experience developed to promote an urban renewal project could induce positive emotions, (b) investigate the relationship between positive emotions and the perceived sense of presence, and (c) explore the association of the experience's ease of interaction with the positive emotions and sense of presence reported by users. Sixty-one participants were recruited from visitors to the 2017 Milan Design Week “Fuorisalone” event. A survey was administered before and after the experience to collect information about users' demographics, positive emotions, sense of presence, and ease of interaction with the virtual content. Results indicate that: (a) the commercial virtual reality experience was able to induce positive emotions; (b) the positive emotions reported by users were associated with the sense of presence experienced in the virtual environment, with a directional effect from emotion to sense of presence; and (c) the easier the interaction, the greater the sense of presence and positive emotions reported by users.
“User Behavior and the Importance of Stereo for Depth Perception in Fish Tank Virtual Reality.” Sirisilp Kongsilp and Matthew N. Dailey. Presence 27(2), 206–225 (2020). DOI: 10.1162/pres_a_00327

Since one of the most important aspects of a Fish Tank Virtual Reality (FTVR) system is how well it provides the illusion of depth to users, we present a study that evaluates users' depth perception in FTVR systems using three tasks. The tasks are based on psychological research on human vision and on depth judgments common in VR applications. We find that participants perform worse with motion parallax cues alone than with stereo cues alone or with a combination of both. Measurements of participants' head movement during each task prove valuable in explaining the experimental findings. We conclude that FTVR users rely on stereopsis more than on motion parallax for depth perception in FTVR environments, especially for tasks requiring depth acuity.
“Effects of Throughput Delay on Perception of Robot Teleoperation and Head Control Precision in Remote Monitoring Tasks.” Jason Orlosky, Konstantinos Theofilis, Kiyoshi Kiyokawa, and Yukie Nagai. Presence 27(2), 226–241 (2020). DOI: 10.1162/pres_a_00328

For robot teleoperators, replicating head motion in a remote environment is often necessary to complete monitoring and face-to-face service tasks. Especially in the case of a stereoscopic camera rig, remote cameras need to be moved to match the operator's head position and optical axes in order to interact with a remote entity, follow targets, or reorient. However, mechanical, computational, and network delay in such teleoperation can cause intersensory conflict, perceptual deficits, and reduced performance, especially during activities that require head rotation. In this article, we evaluate the effects of view reconstruction on performance and perception in remote monitoring tasks. To do so, we first implemented a panoramic reconstruction method that reduces perceived latency in a humanoid robot, which allowed us to compare latency conditions directly. Next, we designed a bidirectional remote control system for the robot and set up a series of experiments in which participants had to conduct focused head control tasks through the perspective of the robot. This allowed us to compare the effects of latency on head movement, accuracy, and subjective perception of the interface and remote teleoperation. Results showed that panoramic reconstruction significantly improved perception and comfort during teleoperation, but that performance only improved for tasks requiring slower head movements.
“An Easy-to-Use Pipeline for an RGBD Camera and an AR Headset.” Hanseul Jun, Jeremy N. Bailenson, Henry Fuchs, and Gordon Wetzstein. Presence 27(2), 202–205 (2020). DOI: 10.1162/pres_a_00326

The contribution of this article lies in providing the code for a working system running on off-the-shelf hardware, not in advancing theory in computer vision or graphics. The current work presents a system that uses one RGBD camera (Microsoft Kinect v2) to capture people in places, and an AR headset (Microsoft HoloLens) to display the scene. While the fidelity of the system is relatively low compared to systems that utilize multiple cameras (e.g., Orts-Escolano et al., 2016), it displays at a high frame rate, has low latency, and is mobile in that it does not require a render computer.
“Effect of Behavioral Realism on Social Interactions Inside Collaborative Virtual Environments.” Fernanda Herrera, Soo Youn Oh, and Jeremy N. Bailenson. Presence 27(2), 163–182 (2020). DOI: 10.1162/pres_a_00324

Collaborative virtual environments (CVEs), wherein people can virtually interact with each other via avatars, are becoming increasingly prominent. However, CVEs differ in type of avatar representation and level of behavioral realism afforded to users. The present investigation compared the effect of behavioral realism on users' nonverbal behavior, self-presence, social presence, and interpersonal attraction during a dyadic interaction. Fifty-one dyads (aged 18 to 26) embodied either a full-bodied avatar with mapped hands and inferred arm movements, an avatar consisting of only a floating head and mapped hands, or a static full-bodied avatar. Planned contrasts compared the effect of behavioral realism against no behavioral realism, and compared the effect of low versus high behavioral realism. Results show that participants who embodied the avatar with only a floating head and hands experienced greater social presence, self-presence, and interpersonal attraction than participants who embodied a full-bodied avatar with mapped hands. In contrast, there were no significant differences on these measures between participants in the two mapped-hands conditions and those who embodied a static avatar. Participants in the static-avatar condition rotated their own physical head and hands significantly less than participants in the other two conditions during the dyadic interaction. Additionally, side-to-side head movements were negatively correlated with interpersonal attraction regardless of condition. We discuss implications of the finding that behavioral realism influences nonverbal behavior and communication outcomes.
“Depth Perception and Manipulation in Projection-Based Spatial Augmented Reality.” Susanne Schmidt, Gerd Bruder, and Frank Steinicke. Presence 27(2), 242–256 (2020). DOI: 10.1162/pres_a_00329

Spatial augmented reality (SAR) technology allows one to change the appearance of objects by projecting directly onto their surface, without requiring users to wear glasses, and can therefore be used in many practical applications. In this article, we present a human-subject study that investigates whether SAR can be used to change one's perception of depth and spatial relationships among objects and humans in a real-world environment. Such projected illusions could open up new possibilities, for example, supporting people who suffer from poor depth perception by compensating for distance and size misperceptions. We present three monoscopic projection-based techniques adapted from visual arts: (i) color temperature, (ii) luminance contrast, and (iii) blur, and show that each of them can significantly change depth perception, even in a real-world environment where other distance cues are present. We discuss practical implications and individual differences in the perception of depth between observers, and we outline future directions for influencing and improving human depth perception in the real world.
“Augmented Reality-Based Real-Time Accurate Artifact Management System for Museums.” Zain Abbas, Wei Chao, Chanyoung Park, Vivek Soni, and Sang Hoon Hong. Presence 27(1), 136–150 (2019). DOI: 10.1162/pres_a_00314

In this article, we present an accurate and easy-to-use augmented reality (AR) application for mobile devices. In addition, we show how museum employees can better organize and track artifacts using both the mobile device and a 3D graphical model of the museum hosted on a PC server. The AR mobile application connects to the server, which maintains the status of each artifact, including its 3D location and the room it belongs to. The system relies on 3D measurements of the rooms in the museum, as well as the coordinates of the artifacts and of reference markers in the respective rooms. The artifact coordinates measured through the AR mobile application are stored on the server and displayed at the corresponding location in the 3D rendered representation of the room. The mobile application allows museum managers to add, remove, or modify artifacts' locations simply by touching the desired location on the touch screen showing live video with an AR overlay. The accuracy of touch screen-based artifact positioning is therefore very important. The accuracy of the proposed technique is validated by evaluating angular error measurements with respect to the horizontal and vertical fields of view, which are 60° and 47°, respectively. The worst-case angular errors in our test environment were 0.60° horizontally and 0.29° vertically, well within the error attributable to touch screen sensing accuracy.
“What We Learned from Mediated Embodiment Experiments and Why It Should Matter to Policymakers.” Laura Aymerich-Franch and Eduard Fosch-Villaronga. Presence 27(1), 63–67 (2019). DOI: 10.1162/pres_a_00312

When people embody a virtual or a robotic avatar, their sense of self extends to the body of that avatar. We argue that, as a consequence, if the avatar gets harmed, the person embodied in that avatar suffers the harm in the first person. Potential scenarios involving physical or psychological harm caused to avatars give rise to legal, moral, and policy implications that need to be considered by policymakers. We maintain that the prevailing distinction in law between the categories of “property” and “person” compromises the legal protection of embodied users. We advocate for the inclusion of robotic and virtual avatars in a double category, property–person, as the property and the person mingle in one: the avatar. This hybrid category is critical to protecting users of mediated embodiment experiences from both potential physical or psychological harm and property damage.
“The ‘Hyper-Presence’ of Cultural Heritage in Shaping Collective Memory.” Zhang Xiao and Yang Deling. Presence 27(1), 107–135 (2019). DOI: 10.1162/pres_a_00321

Virtual reality (VR) uses sensorial mimetics to construct collective memory in virtual space. The regeneration of high-definition cultural heritage symbols transforms memory into an immediate experience that is constantly being renewed, strengthens the relationship between cultural heritage and contemporary society, and continually affects the persistent renewal of cultural traditions. Hyper-presence is a networked state of cognitive psychology that lies in links, interactions, and exchanges; it is the result of networked social minds and distributed cognition. In the contemporary moment, cultural heritage takes on three types of progressively developed presence: simulated restoration presence, informationally reproduced presence, and symbolically regenerated presence. Symbolic regeneration belongs to the realm of hyper-presence. Building databases with data collected on cultural heritage is the foundation of building a cognitive agent. As a platform, VR becomes an efficient mode of information dissemination, forming an independent presence for cultural heritage through the reproduction of media and information. In a network society, informatized cultural heritage becomes a source for the production of new cultural symbols, and presence is created through the continuous regeneration and dissemination of symbols. Symbols and regenerated symbols combine to constitute the hyper-presence of informatized cultural heritage; people's understanding of cultural heritage therefore exists in an ever-changing state. Intelligences with presence on the network form a complete system, and VR creates comprehensive cognition for the system through high-definition virtuality. Formed in the coordination between intelligences, collective memory creates its hyper-presence today.
“Personalizing Content Presentation on Large 3D Head-Up Displays.” Renate Häuslschmid, Donghao Ren, Florian Alt, Andreas Butz, and Tobias Höllerer. Presence 27(1), 80–106 (2019). DOI: 10.1162/pres_a_00315

Drivers' urge to access content on smartphones while driving causes a high number of fatal accidents every year. We explore full-windshield-size 3D head-up displays as an opportunity to present such content in a safer manner. In particular, we look into how drivers would personalize such displays and whether that personalization can be considered safe. Firstly, by means of an online survey, we identify the types of content users access on their smartphones while driving and whether users are interested in the same content on a head-up display. Secondly, we let drivers design personalized 3D layouts and assess how personalization impacts driving safety. Thirdly, we compare personalized layouts to a one-fits-all layout concept in a 3D driving simulator study with regard to safety. We found that drivers' content preferences diverge widely and that most of the personalized layouts do not sufficiently account for safety. The one-fits-all layout led to better response performance but needs to be modified to account for drivers' preferences. We discuss the implications of the presented research for road safety and for future 3D information placement on head-up displays.