We present a new 360 camera design for creating 360 videos for immersive VR experiences. We place eight fisheye lenses on a circle; four of them, interleaved with the others, are tilted slightly upward to cover the scene above. To the best of our knowledge, our camera has the smallest diameter of any existing stereo multi-lens rig on the market. The camera can be used to create 2D, 3D, and 6DoF multi-format 360 videos. Due to its compact design, the minimum safe distance of our camera is very short (approximately 30 cm), which allows users to create especially intimate immersive experiences. We also propose to characterize camera designs by the fractal ratio, the ratio of the distance between adjacent viewpoints to the interpupillary distance. While most earlier camera designs have a fractal ratio $\geq 1$, our camera has a fractal ratio $< 1$. Moreover, with an adjustable rendering interpupillary distance, our camera can flexibly control the interpupillary distance used when creating 3D 360 videos. The design is also highly fault tolerant: the camera can continue operating properly even if some individual lenses fail.
{"title":"A New 360 Camera Design for Multi Format VR Experiences","authors":"Xinyu Zhang, Yao Zhao, Nikk Mitchell, Wensong Li","doi":"10.1109/VR.2019.8798226","DOIUrl":"https://doi.org/10.1109/VR.2019.8798226","url":null,"abstract":"We present a new 360 camera design for creating 360 videos for immersive VR experiences. We place eight fish-eye lenses on a circle. Four interlaced fish-eye lenses are slightly re-oriented up in order to cover the scene above. To the best of our knowledge, our camera has the smallest diameter of any existing stereo multi-lens rig on the market. Our camera can be used to create 2D, 3D and 6DoF multi-format 360 videos. Due to its compact design, the minimum safe distance of our new camera is very short (approximately 30cm). This allows users to create special intimate immersive experiences. We also propose to characterize the camera design using the fractal ratio of the distance of adjacent view points and interpupillary distance. While most early camera designs have fractal ratio $> 1$ or $=1$, our camera has the fractal ratio $(< 1)$. Moreover, with adjustable rendering interpupillary distance, our camera can be used to flexibly control the interpupillary distance for creating 3D 360 videos. Our camera design has high fault tolerance and it can continue operating properly even in the event of the failure of some individual lenses.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124129193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive Virtual Reality paired with soft robotics may be synergized to create personalized assistive therapy experiences. Newly available low-cost, high-performance commercial Virtual Reality (VR) devices give virtual worlds the power to stimulate the user and to enable engaging and accurate physical therapy, and soft robotic wearables are a versatile tool in such stimulation. This preliminary study investigates a novel rehabilitative VR experience, Project Butterfly (PBF), that synergizes VR Mirror Visual Feedback Therapy with soft robotic exoskeletal support. Nine users of varying ability explore an immersive, gamified physiotherapy experience by following and protecting a virtual butterfly, complete with an actuated robotic wearable that motivates and assists the user in performing rehabilitative physical movement. Specifically, the goals of this study are to evaluate the feasibility, ease of use, and comfort of the proposed system. The study concludes with a set of design considerations for future immersive, robot-assisted physical rehabilitation games.
{"title":"Project Butterfly: Synergizing Immersive Virtual Reality with Actuated Soft Exosuit for Upper-Extremity Rehabilitation","authors":"Aviv Elor, Steven Lessard, M. Teodorescu, S. Kurniawan","doi":"10.1109/VR.2019.8798014","DOIUrl":"https://doi.org/10.1109/VR.2019.8798014","url":null,"abstract":"Immersive Virtual Reality paired with soft robotics may be syner-gized to create personalized assistive therapy experiences. Virtual worlds hold power to stimulate the user with newly instigated low-cost, high-performance commercial Virtual Reality (VR) devices to enable engaging and accurate physical therapy. Soft robotic wear-ables are a versatile tool in such stimulation. This preliminary study investigates a novel rehabilitative VR experience, Project Butterfly (PBF), that synergizes VR Mirror Visual Feedback Therapy with soft robotic exoskeletal support. Nine users of ranging ability explore an immersive gamified physio-therapy experience by following and protecting a virtual butterfly, completed with an actuated robotic wearable that motivates and assists the user to perform rehabilitative physical movement. Specifically, the goals of this study are to evaluate the feasibility, ease-of-use, and comfort of the proposed system. The study concludes with a set of design considerations for future immersive physio-rehab robotic-assisted games.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116942559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce LargeSpace, the world's largest immersive display, and discuss the principles of its design. To clarify the design of large-scale projection-based immersive displays, we address the optimum screen shape, projection approach, and arrangement of projectors and tracking cameras. In addition, a novel distortion correction method for panoramic stereo rendering is described. The method can be applied to any projection-based immersive display with any screen shape, and can generate real-time panoramic-stereoscopic views from the viewpoints of tracked participants. To validate the design principles and the rendering algorithm, we implement LargeSpace and confirm that the method can generate the correct perspective from any position inside the screen viewing area. We implement several applications and show that large-scale immersive displays can be used in the fields of art and experimental psychology.
{"title":"Large-Scale Projection-Based Immersive Display: The Design and Implementation of LargeSpace","authors":"Hikaru Takatori, M. Hiraiwa, H. Yano, Hiroo Iwata","doi":"10.1109/VR.2019.8798019","DOIUrl":"https://doi.org/10.1109/VR.2019.8798019","url":null,"abstract":"In this paper, we introduce LargeSpace, the world's largest immersive display, and discuss the principles of its design. To clarify the design of large-scale projection-based immersive displays, we address the optimum screen shape, projection approach, and arrangement of projectors and tracking cameras. In addition, a novel distortion correction method for panoramic stereo rendering is described. The method can be applied to any projection-based immersive display with any screen shape, and can generate real-time panoramic-stereoscopic views from the viewpoints of tracked participants. To validate the design principles and the rendering algorithm, we implement the LargeSpace and confirm that the method can generate the correct perspective from any position inside the screen viewing area. We implement several applications and show that large-scale immersive displays can be used in the fields of art and experimental psychology.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125761972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mid-air imaging technology presents virtual images that appear to move about in the real world. A conventional mid-air image display based on a retro-transmissive optical element must move its light source by the same distance that the mid-air image is to be moved, and the linear actuator that moves the display serving as the light source makes the system large. To solve this problem, we designed an optical system that realizes high-speed movement of mid-air images without a linear actuator: it moves the virtual image of the light source at high speed by forming that virtual image with the light source and a motor-driven rotating mirror.
{"title":"Optical System That Forms a Mid-Air Image Moving at High Speed in the Depth Direction","authors":"Yui Osato, Naoya Koizurni","doi":"10.1109/VR.2019.8798235","DOIUrl":"https://doi.org/10.1109/VR.2019.8798235","url":null,"abstract":"Mid-air imaging technology expresses how virtual images move about in the real world. A conventional mid-air image display using a retro-transmissive optical element moves a light source the distance a mid-air image is moved. In conventional mid-air image displays, the linear actuator that moves a display as a light source makes the system large. In order to solve this problem, we designed an optical system that realizes high-speed movement of mid-air images without a linear actuator. We propose an optical system that moves the virtual image of the light source at a high speed by generating the virtual image of the light source with a rotating mirror and light source by the motor.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130087796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studies using virtual reality environments (VEs) have shown that subjects can perform path integration tasks with acceptable performance. However, in these studies, subjects could walk naturally across large tracking areas, or researchers provided them with large immersive displays. Unfortunately, these configurations are far from current consumer-oriented VEs (COVEs), and little is known about how their limitations influence this task. Using a triangle completion paradigm, we assessed subjects' spatial performance in path integration tasks with two consumer-oriented displays (an HTC Vive and a GearVR) and two consumer-oriented interaction devices (a Virtuix Omni motion platform and a touchpad control). Our results show that when locomotion is available (motion platform condition), there are significant effects of display and path; in contrast, when locomotion is mediated (touchpad condition), no effect was found. Some future research directions are therefore proposed.
{"title":"Perceived Space and Spatial Performance during Path-Integration Tasks in Consumer-Oriented Virtual Reality Environments","authors":"José L. Dorado, Pablo Fiqueroa, J. Chardonnet, F. Mérienne, J. T. Hernández","doi":"10.1109/VR.2019.8798344","DOIUrl":"https://doi.org/10.1109/VR.2019.8798344","url":null,"abstract":"Studies using virtual reality environments (VE) have shown that subjects can perform path integration tasks with acceptable performance. However, in these studies, subjects could walk naturally across large tracking areas, or researchers provided them with large- immersive displays. Unfortunately, these configurations are far from current consumer-oriented VEs (COVEs), and little is known about how their limitations influence this task. Using a triangle completion paradigm, we assessed the subjects' spatial performance when developing path integration tasks in two consumer-oriented displays (an HTC Vive and a GearVR) and two consumer-oriented interaction devices (a Virtuix Omni motion platform and a Touchpad Control). Our results show that when locomotion is available (motion platform condition), there exist significant effects regarding the display and the path. In contrast, when locomotion is mediated no effect was found. Some future research directions are therefore proposed.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115285944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel virtual turning technique for stationary VR environments that reorients the gazed view towards the center. Prompt reorientation during rapid head motion and blinks switches the scene unnoticeably, yielding a seamless user experience, especially for wide-angle turning. In contrast, continuous narrow-angle turning, which horizontally rotates the virtual world according to the face orientation, enhances the sense of reality. The proposed technique is a hybrid of these two turning schemes. Experiments using simulator sickness and presence questionnaires revealed that our methods achieve comparable or lower sickness scores and higher presence scores than conventional smooth and snap turns.
{"title":"Reorient the Gazed Scene Towards the Center: Novel Virtual Turning Using Head and Gaze Motions and Blink","authors":"Yoshikazu Onuki, I. Kumazawa","doi":"10.1109/VR.2019.8798120","DOIUrl":"https://doi.org/10.1109/VR.2019.8798120","url":null,"abstract":"Novel virtual turning for stationary VR environments, accomplishing to reorient the gazed view towards the center, is proposed. Prompt reorientation during rapid head motion and blinking performed unnoticeable scene switching that achieved the seamless user experience, especially for the wide-angle turning. Whereas, continuous narrow-angle turning by horizontally rotating the virtual world corresponding to the face orientation achieved enhanced sense of reality. The proposal comprises a hybrid of these two turning schemes. Experiments using simulator sickness and presence questionnaires revealed that our methods achieved comparable or lower sickness scores and higher presence scores than conventional smooth and snap turns.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125212602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this contribution, we design, implement, and evaluate the pedagogical benefits of a novel interactive note-taking interface (iVRNote) in VR for studying and reflecting on lectures. In future VR learning environments, students will have difficulty taking notes while wearing a head-mounted display (HMD). To solve this problem, we installed a digital tablet on the desk and provided several tools in VR to facilitate the learning experience. Specifically, we track the position and orientation of the stylus in the physical world and render a virtual stylus in VR. In other words, when students see a virtual stylus somewhere on the desk, they can reach out with their hand for the physical stylus, and the rendered pose also lets them know where they will draw or write before the stylus touches the tablet. Since iVRNote is a digital environment, we also help students save effort in taking extensive notes by providing functions such as post-editing and picture taking, so that they can pay more attention to the lecture in VR. In addition, we record the time of each stroke in the note to help students review a lecture: they can select a part of their note to revisit the corresponding segment of the virtual online lecture. Figures and the accompanying video demonstrate the feasibility of the presented iVRNote system. To evaluate the system, we conducted a user study with 20 participants to assess the preference and pedagogical benefits of the iVRNote interface. The feedback provided by the participants was overall positive and indicated that the iVRNote interface could be effective in VR learning experiences.
{"title":"iVRNote: Design, Creation and Evaluation of an Interactive Note-Taking Interface for Study and Reflection in VR Learning Environments","authors":"Yi-Ting Chen, Chi-Hsuan Hsu, Chih-Han Chung, Yu-Shuen Wang, Sabarish V. Babu","doi":"10.1109/VR.2019.8798338","DOIUrl":"https://doi.org/10.1109/VR.2019.8798338","url":null,"abstract":"In this contribution, we design, implement and evaluate the pedagogical benefits of a novel interactive note taking interface (iVRNote) in VR for the purpose of learning and reflection lectures. In future VR learning environments, students would have challenges in taking notes when they wear a head mounted display (HMD). To solve this problem, we installed a digital tablet on the desk and provided several tools in VR to facilitate the learning experience. Specifically, we track the stylus' position and orientation in the physical world and then render a virtual stylus in VR. In other words, when students see a virtual stylus somewhere on the desk, they can reach out with their hand for the physical stylus. The information provided will also enable them to know where they will draw or write before the stylus touches the tablet. Since the presented iVRNote featuring our note taking system is a digital environment, we also enable students save efforts in taking extensive notes by providing several functions, such as post-editing and picture taking, so that they can pay more attention to lectures in VR. We also record the time of each stroke on the note to help students review a lecture. They can select a part of their note to revisit the corresponding segment in a virtual online lecture. Figures and the accompanying video demonstrate the feasibility of the presented iVRNote system. To evaluate the system, we conducted a user study with 20 participants to assess the preference and pedagogical benefits of the iVRNote interface. The feedback provided by the participants were overall positive and indicated that the iVRNote interface could be potentially effective in VR learning experiences.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116712822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Freehand gesture interaction has long been proposed as a ‘natural’ input method for Augmented Reality (AR) applications, yet it has been little explored for intensive applications like multiscale navigation. In multiscale navigation, such as digital map navigation, pan and zoom are the predominant interactions. A position-based input mapping (e.g., a grabbing metaphor) is intuitive for such interactions, but is prone to arm fatigue. This work focuses on improving digital map navigation in AR with mid-air hand gestures, using a horizontal intangible map display. First, we conducted a user study to explore the effects of handedness (unimanual and bimanual) and input mapping (position-based and rate-based). From these findings we designed DiveZoom and TerraceZoom, two novel hybrid techniques that smoothly transition between position- and rate-based mappings. A second user study evaluated these designs. Our results indicate that the introduced input-mapping transitions can reduce perceived arm fatigue with limited impact on performance.
{"title":"Augmented Reality Map Navigation with Freehand Gestures","authors":"Kadek Ananta Satriadi, Barrett Ens, Maxime Cordeil, B. Jenny, Tobias Czauderna, Wesley Willett","doi":"10.1109/VR.2019.8798340","DOIUrl":"https://doi.org/10.1109/VR.2019.8798340","url":null,"abstract":"Freehand gesture interaction has long been proposed as a ‘natural’ input method for Augmented Reality (AR) applications, yet has been little explored for intensive applications like multiscale navigation. In multiscale navigation, such as digital map navigation, pan and zoom are the predominant interactions. A position-based input mapping (e.g. grabbing metaphor) is intuitive for such interactions, but is prone to arm fatigue. This work focuses on improving digital map navigation in AR with mid-air hand gestures, using a horizontal intangible map display. First, we conducted a user study to explore the effects of handedness (unimanual and bimanual) and input mapping (position-based and rate-based). From these findings we designed DiveZoom and TerraceZoom, two novel hybrid techniques that smoothly transition between position- and rate-based mappings. A second user study evaluated these designs. Our results indicate that the introduced input-mapping transitions can reduce perceived arm fatigue with limited impact on performance.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117161111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We generate synthetic annotated data for learning 3D human pose estimation from an egocentric fisheye camera. Synthetic humans are rendered from a virtual fisheye camera with random backgrounds, clothing, and lighting parameters. In addition to RGB images, we generate ground-truth 2D/3D poses and location heat-maps. This removes the need to capture a large variety of real images and label them manually. The approach is intended for challenging situations, such as capturing training data in sports.
{"title":"Generating Synthetic Humans for Learning 3D Pose Estimation","authors":"Kohei Aso, D. Hwang, H. Koike","doi":"10.1109/VR.2019.8797894","DOIUrl":"https://doi.org/10.1109/VR.2019.8797894","url":null,"abstract":"We generate synthetic annotated data for learning 3D human pose estimation using an egocentric fisheye camera. Synthetic humans are rendered from a virtual fisheye camera, with a random background, random clothing, random lighting parameters. In addition to RGB images, we generate ground truth of 2D/3D poses and location heat-maps. Capturing huge and various images and labeling manually for learning are not required. This approach will be used for the challenging situation such as capturing training data in sports.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127453389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual field defects (VFDs) are difficult for most patients to recognize because of the filling-in mechanism of the human brain. The standard visual field test displays light stimuli within the range of the effective visual field and records the patient's responses once the stimuli are recognized. Since these responses are determined subjectively by the patient, the resulting measure may be less reliable. The test may also take more than 30 minutes and requires the patient to keep the gaze and head fixed, which places a physical burden on the patient. In this study, we propose active visual field testing (AVFT) based on a high-speed virtual reality head-mounted display (VR-HMD) eye tracker, which increases testing reliability and reduces the physical burden during the test. Our tracker runs at up to 240 Hz, allowing rapid eye movements to be measured and visual fixations and saccades, which are essential for evaluating visual field defects, to be detected precisely. The characteristics of visual fixations and saccades are used to confirm when each stimulus is recognized by the patient during the test. Our experiment shows that each test can be conducted in 5 minutes.
{"title":"VR-HMD Eye Tracker in Active Visual Field Testing","authors":"Katsuyoshi Hotta, O. Prima, Takashi Imabuchi, Hisayoshi Ito","doi":"10.1109/VR.2019.8798030","DOIUrl":"https://doi.org/10.1109/VR.2019.8798030","url":null,"abstract":"Visual field defects (VFDs) is difficult to recognize by most patients because of the filling-in mechanism in the human brain. The current visual field test displays light sources within the range of the effective visual field and takes the responses from the patient after recognizing this light stimulus. Since, these responses are determined subjectively by the patient, the resulted measure may be less reliable. This method may take more than 30 minutes, requiring the patient to fix his gaze and head where it may give a physical burden in the patient. In this study, we propose an active visual field testing (AVFT) based on a high-speed virtual reality head-mounted display (VR-HMD) eye tracker which enables to increase the testing reliability and to reduce the physical burden during the test. Our tracker runs up to 240Hz allowing the measurement of rapid eye movement to precisely detect visual fixation and saccades which provide essential elements to evaluate defects in the visual field. The characteristics of visual fixation and saccades are utilized to confirm when each stimulus is recognized by the patient during the test. Our experiment shows that each test can be conducted in 5 minutes.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123769245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}