Estimation of Detection Thresholds for Redirected Turning
Junya Mizutani, Keigo Matsumoto, Ryohei Nagao, Takuji Narumi, T. Tanikawa, M. Hirose
DOI: 10.1109/VR.2019.8797976
Redirection makes it possible to walk around a vast virtual space within a limited real space while providing a natural walking sensation, by applying a gain to the amount of movement in the real space. However, existing methods cannot manipulate the walking path while keeping the user on it and maintaining the naturalness of walking when turning at a corner. To realize natural manipulation for turning at a corner, this study proposes novel “turning gains”, which relate the real turning angle to the virtual one. The results of an experiment that estimated the detection thresholds of turning gains indicated that when the turning radius is 0.5 m, discrimination is more difficult than with rotation gains $(r = 0.0\,\mathrm{m})$.
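A minimal sketch of the gain idea, for illustration only (the function and parameter names are ours, not the authors'): the virtual turn rendered in the headset is the real turn scaled by a turning gain, and gains within the detection thresholds should go unnoticed.

```python
# Illustrative sketch (not the authors' code): a turning gain g_T scales the
# user's real heading change into the virtual heading change rendered in the
# headset while the user rounds a corner of some radius r.

def apply_turning_gain(real_heading_change_deg: float, g_t: float) -> float:
    """Map a real turning angle (degrees) to the rendered virtual one."""
    return g_t * real_heading_change_deg

# Example: with g_T = 1.2, a 90-degree real corner is rendered as a
# 108-degree virtual corner, so the virtual path bends more than the real one.
print(apply_turning_gain(90.0, 1.2))  # 108.0
```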
{"title":"Estimation of Detection Thresholds for Redirected Turning","authors":"Junya Mizutani, Keigo Matsumoto, Ryohei Nagao, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1109/VR.2019.8797976","DOIUrl":"https://doi.org/10.1109/VR.2019.8797976","url":null,"abstract":"Redirection makes it possible to walk around a vast virtual space in a limited real space while providing a natural walking sensation by applying a gain to the amount of movement in a real space. However, manipulating the walking path while keeping it and maintaining the naturalness of walking when turning at a corner cannot be achieved by the existing methods. To realize natural manipulation for turning at a corner, this study proposes novel “turning gains”, which refer to the increase in real and virtual turning degrees. The result of an experiment which aims to estimate the detection thresholds of turning gains indicated that when the turning radius is 0.5 m, discrimination is more difficult compared with the rotation gains $(r=0.0mathrm{m})$.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122149827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In immersive virtual environments (IVEs), users' visual and auditory perception is replaced by computer-generated stimuli. Thus, knowing the positions of real objects is crucial for physical safety. While some solutions exist, e.g., using virtual replicas or visible cues indicating the interaction space boundaries, these limit the IVE design or depend on the hardware setup. Moreover, most solutions cannot handle lost tracking, erroneous tracker calibration, or moving obstacles. However, these are common scenarios, especially in the increasingly popular home virtual reality settings. In this paper, we present a stand-alone hardware device designed to alert IVE users to potential collisions with real-world objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We implemented different types of sensor-actuator mappings with the goal of finding a mapping function that is minimally obtrusive in normal use but alerts efficiently in risk situations.
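The paper compares several sensor-actuator mappings; the sketch below shows one plausible candidate under our own assumptions (names and constants are illustrative, not from the paper): silent beyond a safe distance, then rising steeply as an obstacle approaches, so the cue stays unobtrusive until the risk is real.

```python
# Hypothetical distance-to-vibration mapping in the spirit described above:
# a dead zone keeps the actuators silent in normal use, and a quadratic ramp
# emphasizes close-range urgency more than a linear ramp would.

def vibration_intensity(distance_m: float,
                        safe_m: float = 1.5,
                        min_m: float = 0.2) -> float:
    """Return an actuator duty cycle in [0, 1] for a measured obstacle distance."""
    if distance_m >= safe_m:
        return 0.0          # dead zone: no feedback in normal use
    if distance_m <= min_m:
        return 1.0          # imminent collision: full vibration
    t = (safe_m - distance_m) / (safe_m - min_m)
    return t * t            # quadratic ramp between the two regimes

for d in (2.0, 1.0, 0.5, 0.25):
    print(d, round(vibration_intensity(d), 2))
```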
{"title":"Vibro-tactile Feedback for Real-world Awareness in Immersive Virtual Environments","authors":"Dimitar Valkov, L. Linsen","doi":"10.1109/VR.2019.8798036","DOIUrl":"https://doi.org/10.1109/VR.2019.8798036","url":null,"abstract":"In immersive virtual environments (IVE), users' visual and auditory perception is replaced by computer-generated stimuli. Thus, knowing the positions of real objects is crucial for physical safety. While some solutions exist, e. g., using virtual replicas or visible cues indicating the interaction space boundaries, these are limiting the IVE design or depend on the hardware setup. Moreover, most solutions cannot handle lost tracking, erroneous tracker calibration, or moving obstacles. However, these are common scenarios especially for the increasingly popular home virtual reality settings. In this paper, we present a stand-alone hardware device designed to alert IVE users for potential collisions with real-world objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We implemented different types of sensor-actuator mappings with the goal to find a mapping function that is minimally obtrusive in normal use, but efficiently alerting in risk situations.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123959343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheng Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, A. S. Won
Distributed teams rely on asynchronous computer-mediated communication (CMC) tools to complete collaborative tasks because of the difficulty and cost of scheduling synchronous communication. In this paper, we present VR-Replay, a new communication tool that records and replays avatars with both nonverbal behavior and verbal communication for asynchronous collaboration in VR. We describe a study comparing VR-Replay with a desktop-based collaborative virtual environment (CVE) with audio annotation and an immersive VR CVE with audio annotation. Our results suggest that viewing the replayed avatar in VR-Replay improves teamwork, leading people to view their partners as more likable, warm, and friendly. 75% of the users chose VR-Replay as their preferred communication tool in our study.
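To make the record-and-replay idea concrete, here is an illustration-only sketch under our own assumptions (the field names and frame layout are not the authors' data format): timestamped avatar transforms are logged during a session and later played back on a replica avatar.

```python
# Minimal record/replay structure for avatar motion, assumed for illustration.
from dataclasses import dataclass, field

@dataclass
class PoseFrame:
    t: float            # seconds since recording start
    head: tuple         # (x, y, z, qx, qy, qz, qw)
    left_hand: tuple
    right_hand: tuple

@dataclass
class AvatarRecording:
    frames: list = field(default_factory=list)

    def record(self, t, head, left_hand, right_hand):
        self.frames.append(PoseFrame(t, head, left_hand, right_hand))

    def frame_at(self, t):
        """Return the most recent frame at or before playback time t."""
        prior = [f for f in self.frames if f.t <= t]
        return prior[-1] if prior else None
```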
{"title":"VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design","authors":"Cheng Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, A. S. Won","doi":"10.1109/VR.2019.8797789","DOIUrl":"https://doi.org/10.1109/VR.2019.8797789","url":null,"abstract":"Distributed teams rely on asynchronous CMC tools to complete collaborative tasks due to the difficulties and costs surrounding scheduling synchronous communications. In this paper, we present VR-Replay, a new communication tool that records and replays avatars with both nonverbal behavior and verbal communication in VR asynchronous collaboration. We describe a study comparing VR-Replay with a desktop-based CVE with audio annotation and a VR immersive CVE with audio annotation. Our results suggest that viewing the replay avatar in VR-Replay improves teamwork, causing people to view their partners as more likable, warm, and friendly. 75% of the users chose VR-Replay as the preferred communication tool in our study.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127545735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The presentation of odors is considered important as a means of conveying a sense of presence. However, unlike the three primary colors in vision or the basic tastes in gustation, no set of basic odors has been established from which arbitrary odors can be generated by combination. To present various kinds of odors, it is therefore generally necessary to prepare each corresponding odorant. In this research, an odor modulation method is proposed based on a cross-modal effect between olfaction and thermal sensation, which may reduce the number of odorants needed to generate the odors presented to the user. The reported experimental results show that the sensation of an odor can be modulated by presenting warm or cool air along with it, even when the same odorant is presented to the subjects.
{"title":"Odor Modulation by Warming/Cooling Nose Based on Cross-modal Effect","authors":"Yuichi Fujino, H. Matsukura, D. Iwai, Kosuke Sato","doi":"10.1109/VR.2019.8797727","DOIUrl":"https://doi.org/10.1109/VR.2019.8797727","url":null,"abstract":"Presentation of odors is considered to be important as a means for giving a sense of presence. However, the basic odors have not been established to generate any kinds of odors by combining them, like the three primary colors in vision, the basic tastes in gustation. In order to present various kinds of odors, in general, it is necessary to prepare each corresponding odorant. In this research, an odor modulation method is proposed based on a cross-modal effect between olfaction and thermal sensation, which might be able to decrease the number of odorants used to generate odors presented to the user. The experimental results are reported to show that sensation of odors can be modulated by presenting warm/cool air with the odors even if the same odors are presented to the subjects.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129139414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurately predicting where the user of a Virtual Reality (VR) application will be looking in the near future improves the perceived quality of services such as adaptive tile-based streaming and personalized online training. However, this remains a major challenge because of the unpredictability and dissimilarity of user behavior. In this work, we propose to use reinforcement learning, in particular contextual bandits, to solve this problem. The proposed solution tackles the prediction in two stages: (1) detection of movement and (2) prediction of direction. To prove its potential for VR services, the method was deployed on an adaptive tile-based VR streaming testbed and benchmarked against a 3D trajectory extrapolation approach. Our results showed a significant improvement in prediction error compared to the benchmark. This reduced prediction error also resulted in an enhancement of the perceived video quality.
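As a rough illustration of the two-stage idea (the discretization, reward, and exploration scheme below are our assumptions, not the paper's exact formulation), an epsilon-greedy contextual bandit can pick a direction "arm" given a movement context and learn from whether the prediction matched the user's actual viewport motion:

```python
# Minimal epsilon-greedy contextual bandit sketch for direction prediction.
import random
from collections import defaultdict

ARMS = ["left", "right", "up", "down", "none"]

class DirectionBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (context, arm) -> running mean reward
        self.count = defaultdict(int)

    def predict(self, context: str) -> str:
        if random.random() < self.epsilon:
            return random.choice(ARMS)    # explore
        return max(ARMS, key=lambda a: self.value[(context, a)])  # exploit

    def update(self, context: str, arm: str, reward: float) -> None:
        key = (context, arm)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# context: e.g. a coarse bucket of recent head velocity (stage 1 output);
# reward: 1.0 if the predicted direction matched the observed viewport motion.
bandit = DirectionBandit()
bandit.update("moving_fast", "left", 1.0)
print(bandit.predict("moving_fast"))
```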
{"title":"Contextual Bandit Learning-Based Viewport Prediction for 360 Video","authors":"J. Heyse, M. T. Vega, F. D. Backere, F. Turck","doi":"10.1109/VR.2019.8797830","DOIUrl":"https://doi.org/10.1109/VR.2019.8797830","url":null,"abstract":"Accurately predicting where the user of a Virtual Reality (VR) application will be looking at in the near future improves the perceive quality of services, such as adaptive tile-based streaming or personalized online training. However, because of the unpredictability and dissimilarity of user behavior it is still a big challenge. In this work, we propose to use reinforcement learning, in particular contextual bandits, to solve this problem. The proposed solution tackles the prediction in two stages: (1) detection of movement; (2) prediction of direction. In order to prove its potential for VR services, the method was deployed on an adaptive tile-based VR streaming testbed, for benchmarking against a 3D trajectory extrapolation approach. Our results showed a significant improvement in terms of prediction error compared to the benchmark. This reduced prediction error also resulted in an enhancement on the perceived video quality.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"06 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130482288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redirected walking enables users to locomote naturally within a virtual environment that is larger than the available physical space. These systems depend on steering algorithms that continuously redirect users within limited real-world boundaries. While the majority of recent research has focused on predictive algorithms, it is often necessary to use reactive approaches when the user's path is unconstrained. Unfortunately, previously proposed reactive algorithms assume a completely empty space with convex boundaries and perform poorly in complex real-world spaces containing obstacles. To overcome this limitation, we present Push/Pull Reactive (P2R), a novel algorithm that uses an artificial potential function to steer users away from potential collisions. We also introduce three new reset strategies and conducted an experiment to evaluate which performs best when used with P2R. Simulation results demonstrate that the proposed approach outperforms the previous state-of-the-art reactive algorithm in non-convex spaces both with and without interior obstacles.
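For intuition, a generic artificial-potential-function step might look like the sketch below (the inverse-square falloff and constants are our assumptions; P2R's actual formulation may differ): each wall point and obstacle contributes a repulsive vector that decays with distance, and their sum gives the direction toward which steering should push the user.

```python
# Generic repulsive-force sketch for potential-function steering.
import math

def repulsive_force(user, sources, falloff: float = 1.0):
    """Sum inverse-square repulsive vectors from the nearest obstacle points."""
    fx = fy = 0.0
    for sx, sy in sources:
        dx, dy = user[0] - sx, user[1] - sy
        d = math.hypot(dx, dy) or 1e-6    # guard against zero distance
        mag = falloff / (d * d)           # inverse-square falloff
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy

# The steering algorithm can then inject rotation/curvature gains that curve
# the user's real path toward the direction of the summed force.
print(repulsive_force((1.0, 1.0), [(0.0, 1.0), (1.0, 0.0), (3.0, 3.0)]))
```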
{"title":"A General Reactive Algorithm for Redirected Walking Using Artificial Potential Functions","authors":"Jerald Thomas, Evan Suma Rosenberg","doi":"10.1109/VR.2019.8797983","DOIUrl":"https://doi.org/10.1109/VR.2019.8797983","url":null,"abstract":"Redirected walking enables users to locomote naturally within a virtual environment that is larger than the available physical space. These systems depend on steering algorithms that continuously redirect users within limited real world boundaries. While a majority of the most recent research has focused on predictive algorithms, it is often necessary to utilize reactive approaches when the user's path is unconstrained. Unfortunately, previously proposed reactive algorithms assume a completely empty space with convex boundaries and perform poorly in complex real world spaces containing obstacles. To overcome this limitation, we present Push/Pull Reactive (P2R), a novel algorithm that uses an artificial potential function to steer users away from potential collisions. We also introduce three new reset strategies and conducted an experiment to evaluate which one performs best when used with P2R. Simulation results demonstrate that the proposed approach outperforms the previous state-of-the-art reactive algorithm in non-convex spaces with and without interior obstacles.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129532857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hugo Brument, Iana Podkosova, H. Kaufmann, A. Olivier, F. Argelaguet
This paper investigates whether the body anticipation synergies observed in real environments (REs) are preserved during navigation in virtual environments (VEs). Experimental studies of the control of human locomotion along curved trajectories in REs report a top-down reorientation strategy, with the reorientation of the gaze anticipating the reorientation of the head, the shoulders, and finally the global body motion. This anticipation behavior provides a stable reference frame that the walker uses to control and reorient his/her body according to the future walking direction. To assess body anticipation during navigation in VEs, we conducted an experiment in which participants, wearing a head-mounted display, performed a lemniscate trajectory in a VE using five different navigation techniques, including walking, virtual steering (head, hand, or torso steering), and passive navigation. For the purpose of this experiment, we designed a new control law based on the power-law relation between speed and curvature during human walking. Taken together, our results showed a similar ordered top-down sequence of reorientation of the gaze, head, and shoulders during curved trajectories between walking in REs and in VEs (for all the evaluated techniques). However, the anticipation effect was significantly stronger in the walking condition than in the others. The results presented in this paper pave the way to a better understanding of the underlying mechanisms of human navigation in VEs and to the design of navigation techniques better adapted to humans.
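As a sketch of such a control law (the constants are assumptions; locomotion studies commonly report an exponent near 1/3, the classic "one-third power law", but the paper's exact parameters are not given here), the commanded speed can be derived directly from path curvature:

```python
# Power-law speed control sketch: v = k * kappa**(-beta).

def speed_from_curvature(kappa: float, k: float = 1.0, beta: float = 1/3) -> float:
    """Walking speed (m/s) prescribed for a path of curvature kappa (1/m)."""
    return k * kappa ** (-beta)

# Tighter curves (larger kappa) yield lower commanded speed, mimicking how
# walkers naturally slow down in the bends of a lemniscate.
for kappa in (0.25, 0.5, 1.0, 2.0):
    print(kappa, round(speed_from_curvature(kappa), 3))
```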
{"title":"Virtual vs. Physical Navigation in VR: Study of Gaze and Body Segments Temporal Reorientation Behaviour","authors":"Hugo Brument, Iana Podkosova, H. Kaufmann, A. Olivier, F. Argelaguet","doi":"10.1109/VR.2019.8797721","DOIUrl":"https://doi.org/10.1109/VR.2019.8797721","url":null,"abstract":"This paper investigates whether the body anticipation synergies in real environments (REs) are preserved during navigation in virtual environments (VEs). Experimental studies related to the control of human locomotion in REs during curved trajectories report a top-down reorientation strategy with the reorientation of the gaze anticipating the reorientation of head, the shoulders and finally the global body motion. This anticipation behavior provides a stable reference frame to the walker to control and reorient his/her body according to the future walking direction. To assess body anticipation during navigation in VEs, we conducted an experiment where participants, wearing a head-mounted display, performed a lemniscate trajectory in a virtual environment (VE) using five different navigation techniques, including walking, virtual steering (head, hand or torso steering) and passive navigation. For the purpose of this experiment, we designed a new control law based on the power-law relation between speed and curvature during human walking. Taken together our results showed a similar ordered top-down sequence of reorientation of the gaze, head and shoulders during curved trajectories between walking in REs and in VEs (for all the evaluated techniques). However, the anticipation mechanism was significantly higher for the walking condition compared to the others. The results presented in this paper pave the way to the better understanding of the underlying mechanisms of human navigation in VEs and to the design of navigation techniques more adapted to humans.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"43 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129738254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While we are in the midst of a renaissance of interest in augmented reality (AR), only a small number of application domains have seen significant development. Education is a domain that often drives innovation with emerging technologies, and physics is one subject that particularly benefits from additional visualization capabilities. In this paper, we present the results of a series of interviews with secondary school teachers about their experience with AR and the features that would be most beneficial to them from a pedagogical perspective. To gather meaningful information, a prototype application was developed and presented to the teachers. Based on the feedback collected, we present a set of design recommendations for AR physics education tools, along with other useful comments gathered from the teachers.
{"title":"Determining Design Requirements for AR Physics Education Applications","authors":"Corey R. Pittman, J. Laviola","doi":"10.1109/VR.2019.8797908","DOIUrl":"https://doi.org/10.1109/VR.2019.8797908","url":null,"abstract":"While we are in the midst of a renaissance of interest in augmented reality (AR), there remain a small number of application domains which have seen significant development. Education is a domain that often drives innovation with emerging technologies. One particular subject which benefits from additional visualization capabilities is physics. In this paper, we present the results of a series of interviews with secondary school teachers about their experience with AR and the features which would be most beneficial to them from a pedagogical perspective. To gather meaningful information, a prototype application was developed and presented to the teachers. Based on the feedback collected from the teachers, we present a set of design recommendations for AR physics education tools, as well as other useful collects comments.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121057676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Souichi Tashiro, Hideaki Uchiyama, D. Thomas, R. Taniguchi
This paper presents a 3D positioning system based on one-handed thumb interactions for simple 3D annotation placement with a smartphone. To place an annotation at a target point in the real environment, the 3D coordinate of the point is computed from corresponding pixels that the user interactively selects in multiple views while the system performs SLAM. In general, it is difficult for users to precisely select an intended pixel on the touchscreen. We therefore propose computing the 3D coordinate from multiple observations with a robust estimator, making the system tolerant of inaccurate user input. In addition, we developed three pixel-selection methods based on one-handed thumb interactions: a pixel is selected at the thumb position in a live view (FingAR), at the position of a reticle marker in a live view (SnipAR), or at the position of a movable reticle marker in a frozen view (FreezAR). In a preliminary evaluation, we investigated the 3D positioning accuracy of each method.
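To illustrate the multi-observation estimation (the closed-form least-squares step is standard; the residual-based outlier rejection below is our stand-in for the paper's unspecified robust estimator), each tap yields a ray from the tracked camera, and the annotation point minimizes the squared distance to all rays:

```python
# Robust-ish multi-ray triangulation sketch.
import numpy as np

def triangulate(origins, dirs, inlier_thresh: float = 0.05):
    """Estimate the 3D point closest to a set of rays (origin + direction)."""
    origins = np.asarray(origins, dtype=float)
    dirs = np.asarray(dirs, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

    def solve(o, d):
        # Normal equations of min_x sum ||(I - d d^T)(x - o)||^2
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for oi, di in zip(o, d):
            P = np.eye(3) - np.outer(di, di)
            A += P
            b += P @ oi
        return np.linalg.solve(A, b)

    x = solve(origins, dirs)
    # Perpendicular distance from x to each ray; refit on inliers only.
    r = x - origins
    along = np.sum(r * dirs, axis=1, keepdims=True) * dirs
    dist = np.linalg.norm(r - along, axis=1)
    inliers = dist < inlier_thresh
    if 2 <= inliers.sum() < len(dirs):
        x = solve(origins[inliers], dirs[inliers])
    return x
```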
{"title":"3D Positioning System Based on One-handed Thumb Interactions for 3D Annotation Placement","authors":"Souichi Tashiro, Hideaki Uchiyama, D. Thomas, R. Taniguchi","doi":"10.1109/VR.2019.8797979","DOIUrl":"https://doi.org/10.1109/VR.2019.8797979","url":null,"abstract":"This paper presents a 3D positioning system based on one-handed thumb interactions for simple 3D annotation placement with a smart-phone. To place an annotation at a target point in the real environment, the 3D coordinate of the point is computed by interactively selecting the corresponding points in multiple views by users while performing SLAM. Generally, it is difficult for users to precisely select an intended pixel on the touchscreen. Therefore, we propose to compute the 3D coordinate from multiple observations with a robust estimator to have the tolerance to the inaccurate user inputs. In addition, we developed three pixel selection methods based on one-handed thumb interactions. A pixel is selected at the thumb position at a live view in FingAR, the position of a reticle marker at a live view in SnipAR, or that of a movable reticle marker at a freezed view in FreezAR. In the preliminary evaluation, we investigated the 3D positioning accuracy of each method.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121391149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roughly speaking, there are two strategies for providing users with a realistic virtual perceptual experience. One is to make the physical input to the user's sensory systems close to that of the real experience (the physics-based approach). The other, which sensory scientists (like us) prefer, is to make the response pattern of the user's sensory systems close to that of the real experience (the perception-based approach). Using cognitive and neuroscientific knowledge about human visual processing, we are able to control cortical perceptual representations in addition to sensor responses, and thereby achieve perceptual effects that would be hard to obtain with the straightforward physics-based approach. For instance, recent research on human material perception has suggested simple image-based methods to control glossiness, wetness, subthreshold fineness, and liquid viscosity. Deformation Lamp/Hengento (Kawabe et al., 2016) is a projection mapping technique that can produce illusory movement of a real static object. Although only a dynamic gray-scale pattern is projected, it effectively drives visual motion sensors in the human brain and induces a “motion capture” effect on the colors and textures of the original static object. In Hidden Stereo (Fukiage et al., 2017), multi-scale phase-based binocular disparity signals effectively drive human stereo mechanisms, while the disparity-inducing image components of the left and right images cancel each other out when the images are fused. As a result, viewers with stereo glasses perceive 3D images, while those without glasses can enjoy 2D images with no visible ghosts. I will discuss how vision science helps virtual reality technologies, and how vision science is in turn helped by its application to cutting-edge technologies.
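The Hidden Stereo cancellation can be written schematically as follows (the notation is ours, not necessarily the paper's): a disparity inducer $D$ is added in antiphase to the two eyes' images, driving binocular disparity mechanisms while vanishing in the glasses-free mixture.

```latex
% Schematic form of the Hidden Stereo idea (our notation):
% I = original 2D image, D = phase-based disparity inducer.
\begin{align*}
  L &= I + D, \qquad R = I - D \\
  \tfrac{1}{2}\,(L + R) &= I \quad \text{(viewers without glasses see the original image, ghost-free)}
\end{align*}
```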
{"title":"Keynote Speaker: Hacking Human Visual Perception","authors":"S. Nishida","doi":"10.1109/VR.2019.8798316","DOIUrl":"https://doi.org/10.1109/VR.2019.8798316","url":null,"abstract":"Roughly speaking, there are two strategies to provide users with a virtual realistic perceptual experience. One is to make the physical input to the user�s sensory systems close to that of the real experience (physics-based approach). The other one, which sensory scientists (like us) prefer, is to make the response pattern of the users� sensory system close to that of the real experience (perception-based approach). Using cognitive/neuro-scientific knowledge about human visual processing, we are able to control cortical perceptual representations in addition to sensor responses, and then achieve perceptual effects that would be hard to obtain with the straightforward physics-based approach. For instance, recent research on human material perception has suggested simple image-based methods to control glossiness, wetness, subthreshold fineness and liquid viscosity. Deformation Lamp/Hengento (Kawabe et al., 2016) is a projection mapping technique that can produce an illusory movement of a real static object. Although only a dynamic gray-scale pattern is projected, it effectively drives visual motion sensors in the human brain, and then induces a “motion capture” effect on the colors and textures of the original static object. In Hidden Stereo (Fukiage et al., 2017), multi-scale phase-based binocular disparity signals effectively drives human stereo mechanisms, while the disparity-inducing image components for the left and right images are cancelled out with each other when they are fused. As a result, viewers with stereo glasses perceive 3D images, while those without glasses can enjoy 2D images with no visible ghosts. I will discuss how vision science helps virtual reality technologies, and how vision science is helped by application to the cutting-edge technologies.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122965762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}