Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00055
Xue Teng, R. Allison, L. Wilcox
Virtual reality (VR) is distinguished by the rich, multimodal, immersive sensory information and affordances provided to the user. However, when moving about an immersive virtual world, the visual display often conflicts with other sensory cues due to design, the nature of the simulation, or system limitations (for example, impoverished vestibular motion cues during acceleration in racing games). Given that conflicts between sensory cues have been associated with disorientation or discomfort, and could theoretically distort spatial perception, it is important that we understand how and when they are manifested in the user experience. To this end, this set of experiments investigates the impact of mismatch between physical and virtual motion parallax on the perception of the depth of an apparently perpendicular dihedral angle (a fold) and of its distance. We applied gain distortions between visual and kinesthetic head motion during lateral sway movements and measured the effect of gain on depth, distance, and lateral space compression. We found that under monocular viewing, observers made smaller object depth and distance settings, especially when the gain was greater than 1. Estimates of target distance declined with increasing gain under monocular viewing. Similarly, mean set depth decreased with increasing gain under monocular viewing, except at 6.0 m. The effect of gain was minimal when observers viewed the stimulus binocularly. Further, binocular viewing (stereopsis) improved the precision but not necessarily the accuracy of gain perception. Overall, the lateral compression of space was similar in the stereoscopic and monocular test conditions. Taken together, our results show that the use of large presentation distances (at 6 m), combined with binocular cues to depth and distance, enhanced observers' tolerance to visual-kinesthetic mismatch.
{"title":"Manipulation of Motion Parallax Gain Distorts Perceived Distance and Object Depth in Virtual Reality","authors":"Xue Teng, R. Allison, L. Wilcox","doi":"10.1109/VR55154.2023.00055","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00055","url":null,"abstract":"Virtual reality (VR) is distinguished by the rich, multimodal, im-mersive sensory information and affordances provided to the user. However, when moving about an immersive virtual world the vi-sual display often conflicts with other sensory cues due to design, the nature of the simulation, or to system limitations (for example impoverished vestibular motion cues during acceleration in racing games). Given that conflicts between sensory cues have been as-sociated with disorientation or discomfort, and theoretically could distort spatial perception, it is important that we understand how and when they are manifested in the user experience. To this end, this set of experiments investigates the impact of mismatch between physical and virtual motion parallax on the per-ception of the depth of an apparently perpendicular dihedral angle (a fold) and its distance. We applied gain distortions between visual and kinesthetic head motion during lateral sway movements and measured the effect of gain on depth, distance and lateral space compression. We found that under monocular viewing, observers made smaller object depth and distance settings especially when the gain was greater than 1. Estimates of target distance declined with increasing gain under monocular viewing. Similarly, mean set depth decreased with increasing gain under monocular viewing, except at 6.0 m. The effect of gain was minimal when observers viewed the stimulus binocularly. Further, binocular viewing (stereopsis) improved the precision but not necessarily the accuracy of gain perception. Overall, the lateral compression of space was similar in the stereoscopic and monocular test conditions. Taken together, our results show that the use of large presentation distances (at 6 m) combined with binocular cues to depth and distance enhanced humans' tolerance to visual and kinesthetic mismatch.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115404036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a within-subject study investigating the effects of leading and following behaviors on users' visual attention when collaborating with a virtual agent (VA) on transportation tasks in immersive virtual environments. The task was to carry a target object from one location to a predefined destination. There were two conditions, leader VA (LVA) and follower VA (FVA). The leader gave instructions to the follower to perform actions. In the FVA condition, users played the leader role, while in the LVA condition they played the follower role. The users and the VA communicated via spoken language. During the experiment, participants wore a head-mounted display and moved by real walking in a room. In each condition, each participant performed 20 object-transportation trials with different types of objects. Our preliminary results revealed significant differences in users' visual attention behaviors between the follower and leader VA conditions during the transportation tasks.
{"title":"Comparing Visual Attention with Leading and Following Virtual Agents in a Collaborative Perception-Action Task in VR","authors":"Sai-Keung Wong, Matias Volonte, Kuan-Yu Liu, Elham Ebrahimi, Sabarish V. Babu","doi":"10.1109/VR55154.2023.00031","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00031","url":null,"abstract":"This paper presents a within-subject study to investigate the effects of leading and following behaviors on user visual attention behaviors when collaborating with a virtual agent (VA) during performing transportation tasks in immersive virtual environments. The task was to carry a target object from a location to a predefined location. There were two conditions, namely leader VA (LVA) and follower VA (FVA). The leader gave instructions to the follower to perform actions. In the FVA condition, users played the leader role, while they played the follower role in the LVA condition. The users and the VA communicated via spoken language. During the experiment, participants wore a head-mounted display and performed real walking in a room. In each condition, each participant performed 20 trials of object transportation for different types of objects. Our preliminary results revealed significant differences in the user visual attention behaviors between the follower and leader VA conditions during the transportation tasks.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123815565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00072
Zubin Choudhary, Nahal Norouzi, A. Erickson, Ryan Schubert, G. Bruder, Gregory F. Welch
The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e., facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in the rendering of incorrect facial expressions on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while their facial expression remains otherwise neutral. In circumstances where there is conflict between the avatar's facial and vocal expressions, it is possible that users will incorrectly interpret the avatar's emotion, which may have unintended consequences for social influence or for the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We found significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. Evidence from our results suggests that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increases, the multimodal expression becomes less predictable. For example, for the happy-looking and happy-sounding multimodal expression, we expect and see high happiness ratings and high trust; however, if one of the two expressions changes, this mismatch makes the expression less predictable. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.
{"title":"Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions","authors":"Zubin Choudhary, Nahal Norouzi, A. Erickson, Ryan Schubert, G. Bruder, Gregory F. Welch","doi":"10.1109/VR55154.2023.00072","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00072","url":null,"abstract":"The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e. the facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in the rendering of incorrect facial expressions on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while their facial expression remains otherwise neutral. In circumstances where there is conflict between the avatar's facial and vocal expressions, it is possible that users will incorrectly interpret the avatar's emotion, which may have unintended consequences in terms of social influence or in terms of the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigate three levels of head scales (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We find significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scales. Evidence from our results suggest that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increase, the less predictable the multimodal expression becomes. For example, for the happy-looking and happy-sounding multimodal expression, we expect and see high happiness rating and high trust, however if one of the two expressions change, this mismatch makes the expression less predictable. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121157583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00030
Michael Nitsche, B. Bosley, S. Primo, Jisu Park, Daniel Carr
Age-related Macular Degeneration (AMD) is the leading cause of vision loss among persons over 50. We present a two-part interface consisting of a VR-based visualization for AMD patients and an interconnected doctor interface for optimizing this VR view. The system focuses on remapping imagery to provide customized image optimizations, allowing doctors to generate a tailored, patient-specific VR visualization. We pilot-tested the doctor interface with eye care professionals (n=10). The results indicate the potential of VR-based eye care for doctors to help visually impaired patients, but also show that a training phase is necessary to establish new technologies in vision rehabilitation.
{"title":"Remapping Control in VR for Patients with AMD","authors":"Michael Nitsche, B. Bosley, S. Primo, Jisu Park, Daniel Carr","doi":"10.1109/vr55154.2023.00030","DOIUrl":"https://doi.org/10.1109/vr55154.2023.00030","url":null,"abstract":"Age-related Macular Degeneration (AMD) is the leading cause of vision loss among persons over 50. We present a two-part interface consisting of a VR-based visualization for AMD patients and an interconnected doctor interface to optimize this VR view. It focuses on remapping imagery to provide customized image optimizations. The system allows doctors to generate a tailored, patient-specific VR visualization. We pilot tested the doctor interface (n=10) with eye care professionals. The results indicate the potential of VR-based eye care for doctors to help visually-impaired patients, but also show a necessary training phase to establish new technologies in vision rehabilitation.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126465027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00051
Zimu Yi, Ke Xie, Jiahui Lyu, Minglun Gong, Hui Huang
Image-based rendering (IBR) enables presenting real scenes interactively to viewers and is hence a key component for implementing VR telepresence. The quality of IBR results depends on the set of pre-captured views, the rendering algorithm used, and the camera parameters of the novel view to be synthesized. Numerous methods have been proposed for optimizing the set of captured images and enhancing the rendering algorithms. However, from which regions IBR methods can synthesize satisfactory results is not yet well studied. In this work, we introduce the concept of renderability, which predicts the quality of IBR results at any given viewpoint and view direction. Consequently, the renderability values evaluated over the 5D camera parameter space form a field, which effectively guides viewpoint/trajectory selection for IBR, especially for challenging large-scale 3D scenes. To demonstrate this capability, we designed two VR applications: a path planner that allows users to navigate through sparsely captured scenes with controllable rendering quality, and a view selector that provides an overview of a scene from diverse, high-quality perspectives. We believe the renderability concept, the proposed evaluation method, and the suggested applications will motivate and facilitate the use of IBR in various interactive settings.
{"title":"Where to Render: Studying Renderability for IBR of Large-Scale Scenes","authors":"Zimu Yi, Ke Xie, Jiahui Lyu, Minglun Gong, Hui Huang","doi":"10.1109/VR55154.2023.00051","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00051","url":null,"abstract":"Image-based rendering (IBR) technique enables presenting real scenes interactively to viewers and hence is a key component for implementing VR telepresence. The quality of IBR results depends on the set of pre-captured views, the rendering algorithm used, and the camera parameters of the novel view to be synthesized. Numerous methods were proposed for optimizing the set of captured images and enhancing the rendering algorithms. However, from which regions IBR methods can synthesize satisfactory results is not yet well studied. In this work, we introduce the concept of renderability, which predicts the quality of IBR results at any given viewpoint and view direction. Consequently, the renderability values evaluated for the 5D camera parameter space form a field, which effectively guides viewpoint/trajectory selection for IBR, especially for challenging large-scale 3D scenes. To demonstrate this capability, we designed 2 VR applications: a path planner that allows users to navigate through sparsely captured scenes with controllable rendering quality and a view selector that provides an overview for a scene from diverse and high quality perspectives. We believe the renderability concept, the proposed evaluation method, and the suggested applications will motivate and facilitate the use of IBR in various interactive settings.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127973451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00075
Masatoshi Iuchi, Yuito Hirohashi, H. Oku
In this study, we propose a method for an aerial display. The method uses a high-speed gaze control system and a laser display to perform projection mapping onto a distant screen suspended from a flying drone. A prototype system was developed and successfully demonstrated dynamic projection mapping onto a screen attached to a flying drone at a distance of about 36 m, indicating the effectiveness of the proposed method.
{"title":"Proposal for an aerial display using dynamic projection mapping on a distant flying screen","authors":"Masatoshi Iuchi, Yuito Hirohashi, H. Oku","doi":"10.1109/VR55154.2023.00075","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00075","url":null,"abstract":"In this study, we propose a method for an aerial display. The method uses a high-speed gaze control system and a laser display to perform projection mapping on a screen at a distance, which is suspended from a flying drone. A prototype system was developed and successfully demonstrated dynamic projection mapping on a screen attached to a flying drone at a distance of about 36 m, which indicated the effectiveness of the proposed method.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00089
Praneeth Kumar Chakravarthula
The 2023 VGTC Virtual Reality Best Dissertation Award goes to Praneeth Kumar Chakravarthula, a 2021 graduate of the University of North Carolina at Chapel Hill, for his dissertation entitled “Towards Everyday-use Augmented Reality Eyeglasses”, completed under the supervision of Prof. Henry Fuchs. Praneeth Chakravarthula is currently a research fellow at Princeton University and a Research Assistant Professor at the University of North Carolina at Chapel Hill. His research interests lie at the intersection of optics, graphics, perception, optimization, and machine learning. His Ph.D. dissertation makes progress “towards everyday-use augmented reality eyeglasses” and makes significant advances in three distinct areas: 1) holographic displays and advanced algorithms for generating high-quality true 3D holographic images, 2) hardware and software for robust and comprehensive 3D eye tracking via Purkinje images, and 3) automatic focus-adjusting AR display eyeglasses for well-focused virtual and real imagery, towards potentially achieving 20/20 vision for users of all ages. Since the eyes cannot focus at very near distances, existing AR/VR head-mounted displays use bulky lenses to virtually project the display panel at a long distance on which the eyes can comfortably focus. However, this not only uncomfortably increases the bulk of the display but also severely affects the natural functioning of the human visual system by causing a vergence-accommodation conflict.
{"title":"VGTC Virtual Reality Best Dissertation Award","authors":"Praneeth Kumar Chakravarthula","doi":"10.1109/vr55154.2023.00089","DOIUrl":"https://doi.org/10.1109/vr55154.2023.00089","url":null,"abstract":"The 2023 VGTC Virtual Reality Best Dissertation Award goes to Praneeth Kumar Chakravarthula, a 2021 graduate from the University of North Carolina at Chapel Hill, for his dissertation entitled “Towards Everyday-use Augmented Reality Eyeglasses”, under the supervision of Prof. Henry Fuchs. Praneeth Chakravarthula is currently a research fellow at Princeton University and a Research Assistant Professor at the University of North Carolina at Chapel Hill. His research interests lie at the intersection of optics, graphics, perception, optimization and machine learning. Dr. Chakravarthula obtained his Ph.D. from UNC Chapel Hill under the supervision of Prof. Henry Fuchs. His Ph.D. dissertation makes progress “towards everyday-use augmented reality eyeglasses” and makes significant advances in three distinct areas: 1) holographic displays and advanced algorithms for generating high-quality true 3D holographic images, 2) hardware and software for robust and comprehensive 3D eye tracking via Purkinje images and 3) automatic focus adjusting AR display eyeglasses for well-focused virtual and real imagery, towards potentially achieving 20/20 vision for users of all ages. Since the eyes cannot focus at very near distances, existing AR/VR head mounted displays use bulky lenses to virtually project the display panel at a long distance that the eyes can comfortably focus. However, this results in not only uncomfortably increasing the bulk of the display but also results in severely affecting the natural functioning of the human visual system by causing","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117262773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00029
Yuxi Wang, H. Ling, Bingyao Huang
Full projector compensation is a practical task in projector-camera systems. It aims to find a projector input image, called the compensation image, such that, when projected, it cancels the geometric and photometric distortions caused by the physical environment and hardware. State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups. However, directly applying deep learning to high-resolution setups is impractical due to long training times and high memory cost. To address this issue, this paper proposes a practical full-compensation solution. First, we design an attention-based grid refinement network to improve geometric correction quality. Second, we integrate a novel sampling scheme into an end-to-end compensation network to reduce computation, and we introduce attention blocks to preserve key features. Finally, we construct a benchmark dataset for high-resolution projector full compensation. In experiments, our method demonstrates clear advantages in both efficiency and quality.
{"title":"CompenHR: Efficient Full Compensation for High-resolution Projector","authors":"Yuxi Wang, H. Ling, Bingyao Huang","doi":"10.1109/VR55154.2023.00029","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00029","url":null,"abstract":"Full projector compensation is a practical task of projector-camera systems. It aims to find a projector input image, named compensation image, such that when projected it cancels the geometric and photometric distortions due to the physical environment and hardware. State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups. However, directly applying deep learning to high-resolution setups is impractical due to the long training time and high memory cost. To address this issue, this paper proposes a practical full compensation solution. Firstly, we design an attention-based grid refinement network to improve geometric correction quality. Secondly, we integrate a novel sampling scheme into an end-to-end compensation network to alleviate computation and introduce attention blocks to preserve key features. Finally, we construct a benchmark dataset for high-resolution projector full compensation. In experiments, our method demonstrates clear advantages in both efficiency and quality.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126381648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00035
Martin Feick, K. P. Regitz, Anthony Tang, Tobias Jungbluth, Maurice Rekrut, Antonio Krüger
Hand redirection is effective so long as the introduced offsets are not noticeably disruptive to users. In this work we investigate the use of physiological and interaction data to detect movement discrepancies between a user's real and virtual hand, pushing towards a novel approach for identifying discrepancies that are too large and can therefore be noticed. We ran a study with 22 participants, collecting EEG, ECG, EDA, RSP, and interaction data. Our results suggest that EEG and interaction data can be reliably used to detect visuo-motor discrepancies, whereas ECG and RSP seem to suffer from inconsistencies. Our findings also show that participants quickly adapt to large discrepancies, and that they constantly attempt to establish a stable mental model of their environment. Together, these findings suggest that there is no absolute threshold for non-detectable discrepancies; instead, the threshold depends primarily on participants' most recent experience with this kind of interaction.
{"title":"Investigating Noticeable Hand Redirection in Virtual Reality using Physiological and Interaction Data","authors":"Martin Feick, K. P. Regitz, Anthony Tang, Tobias Jungbluth, Maurice Rekrut, Antonio Krüger","doi":"10.1109/VR55154.2023.00035","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00035","url":null,"abstract":"Hand redirection is effective so long as the introduced offsets are not noticeably disruptive to users. In this work we investigate the use of physiological and interaction data to detect movement discrepancies between a user's real and virtual hand, pushing towards a novel approach to identify discrepancies which are too large and therefore can be noticed. We ran a study with 22 participants, collecting EEG, ECG, EDA, RSP, and interaction data. Our results suggest that EEG and interaction data can be reliably used to detect visuo-motor discrepancies, whereas ECG and RSP seem to suffer from inconsistencies. Our findings also show that participants quickly adapt to large discrepancies, and that they constantly attempt to establish a stable mental model of their environment. Together, these findings suggest that there is no absolute threshold for possible non-detectable discrepancies; instead, it depends primarily on participants' most recent experience with this kind of interaction.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131230749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00047
Alberto Cannavò, F. G. Pratticò, Alberto Bruno, Fabrizio Lamberti
Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry over the last years. Although these technologies are becoming commonplace, they present new, unexplored challenges to actors. In particular, when mocap is used to record actors' movements with the aim of animating digital character models, an increase in workload can be expected for people on stage. In fact, actors have to rely largely on their imagination to understand what the digitally created characters will actually be seeing and feeling. This paper focuses on this specific domain and aims to demonstrate how Augmented Reality (AR) can help actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that actors can use to rehearse a scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlaid in real time on their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system helps actors position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.
{"title":"AR-MoCap: Using Augmented Reality to Support Motion Capture Acting","authors":"Alberto Cannavò, F. G. Pratticò, Alberto Bruno, Fabrizio Lamberti","doi":"10.1109/VR55154.2023.00047","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00047","url":null,"abstract":"Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry over the last years. Although these technologies are getting commonplace, they are presenting new, unexplored challenges to the actors. In particular, when mocap is used to record the actors' movements with the aim of animating digital character models, an increase in the workload can be easily expected for people on stage. In fact, actors have to largely rely on their imagination to understand what the digitally created characters will be actually seeing and feeling. This paper focuses on this specific domain, and aims to demonstrate how Augmented Reality (AR) can be helpful for actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that can be used by actors for rehearsing the scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlapped in real-time to their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system can help the actors to position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133506581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}