In this demo, we share our findings from building real-time 3D experiences with consumer headsets, aiming to go beyond the first-person-shooter gaming usage for which they are designed. We address the key problems of such experiences: they isolate users from their own bodies and cut them off from other people in the room and from the real world. To solve these issues, we use an off-the-shelf Kinect for Windows v2 to inject some reality into the virtuality. A video describing the demo is available here [1].
{"title":"“Never Blind VR” enhancing the virtual reality headset experience with augmented virtuality","authors":"David Nahon, G. Subileau, Benjamin Capel","doi":"10.1109/VR.2015.7223438","DOIUrl":"https://doi.org/10.1109/VR.2015.7223438","url":null,"abstract":"In this demo, we share our findings in building real-time 3D experiences with consumer headsets so as to go beyond the first person shooter gaming usage for which they are designed. We address the key problems of such user experiences which are to isolate the user from his own body, have him lose contact with other people in the room and with the real world. To solve those issues we use an off-the-shelf Kinect for Windows v2 to inject some reality in the virtuality. A video describing the demo is available here [1].","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131845725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhihan Lv, Shengzhong Feng, Liangbing Feng, Haibo Li
A touch-less interaction technology for vision-based wearable devices is designed and evaluated. Users interact with the application through dynamic hand and foot gestures performed in front of the camera. Several proof-of-concept prototypes supporting eleven dynamic gestures are developed on top of this touch-less interaction. Finally, a comparative user study demonstrates the usability of the touch-less approach, as well as its impact on users' emotions, running on a wearable framework or Google Glass.
{"title":"Extending touch-less interaction on vision based wearable device","authors":"Zhihan Lv, Shengzhong Feng, Liangbing Feng, Haibo Li","doi":"10.1109/VR.2015.7223380","DOIUrl":"https://doi.org/10.1109/VR.2015.7223380","url":null,"abstract":"A touch-less interaction technology on vision based wearable device is designed and evaluated. Users interact with the application with dynamic hands/feet gestures in front of the camera. Several proof-of-concept prototypes with eleven dynamic gestures are developed based on the touch-less interaction. At last, a comparing user study evaluation is proposed to demonstrate the usability of the touch-less approach, as well as the impact on user's emotion, running on a wearable framework or Google Glass.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115401843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weiya Chen, Nicolas Ladevèze, C. Clavel, D. Mestre, P. Bourdot
In a multi-stereoscopic immersive system, several users sharing the same restricted workspace, e.g. a CAVE, may need to perform independent navigation to achieve loosely coupled collaboration tasks in a complex scenario. In this context, a proper navigation paradigm should provide users with both efficient control of virtual navigation and a guarantee of safety in the real workspace, relative to the display system and between users. To this end, we propose several alterations of the human joystick metaphor, introducing implicit adaptive control to allow safe individual navigation for multiple users. We conducted a user study with an object-finding task in a double-stereoscopic CAVE-like system to evaluate both users' navigation performance in the virtual world and their behavior in the real workspace under different conditions. The results highlight that the improved paradigm allows two users to navigate independently despite physical system limitations.
{"title":"User cohabitation in multi-stereoscopic immersive virtual environment for individual navigation tasks","authors":"Weiya Chen, Nicolas Ladevèze, C. Clavel, D. Mestre, P. Bourdot","doi":"10.1109/VR.2015.7223323","DOIUrl":"https://doi.org/10.1109/VR.2015.7223323","url":null,"abstract":"In a Multi-stereoscopic immersive system, several users sharing the same restricted workspace, e.g. a CAVE, may need to perform independent navigation to achieve loosely coupled collaboration tasks for a complex scenario. In this context, a proper navigation paradigm should provide users both an efficient control of virtual navigation and a guarantee of users' safety in the real workspace relative to the display system and between users. In this aim, we propose several alterations of the human joystick metaphor by introducing implicit adaptive control to allow safe individual navigation for multiple users. We conducted a user study with an object-finding task in a double-stereoscopic CAVE-like system to evaluate both users' navigation performance in the virtual world and their behavior in the real workspace under different conditions. The results highlight that the improved paradigm allows two users to navigate independently despite physical system limitations.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115899757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Santorineos, Stavroula Zoi, N. Dimitriadi, Taxiarchis Diamantopoulos, John Bardakos, Christina Chrysanthopoulou, I. Mavridou, Annalise Meli, Nikos Papadopoulos, Argyro Papathanasiou, Maria Velaora
“Eπıλoγη in Crisis” is a work in progress developed by the research group of the Greek-French Master entitled "Art, virtual reality and multiuser systems of artistic expression", a collaboration between the Athens School of Fine Arts and the University Paris8 Saint-Denis. It is an interactive project, part research tool and part experimental game, that takes place in a virtual reality environment. It aims to immerse the player inside a system in crisis, so that the player is not a mere spectator but feels a share of responsibility for the crisis and has to act to resolve it. The player's actions are measured and "judged" by the game mechanism itself, thus determining the stability or instability of the system.
{"title":"Eπıλoγη∗ in Crisis∗∗","authors":"M. Santorineos, Stavroula Zoi, N. Dimitriadi, Taxiarchis Diamantopoulos, John Bardakos, Christina Chrysanthopoulou, I. Mavridou, Annalise Meli, Nikos Papadopoulos, Argyro Papathanasiou, Maria Velaora","doi":"10.1109/VR.2015.7223440","DOIUrl":"https://doi.org/10.1109/VR.2015.7223440","url":null,"abstract":"“Eπıλoγη in Crisis” is a work in progress that has been developed by the research group of the Greek-French Master entitled \"Art, virtual reality and multiuser systems of artistic expression\", in a collaboration between the Athens School of Fine Arts and the University Paris8 Saint-Denis. It concerns an interactive project which is in-between a research tool and experimental game, that takes place in a virtual reality environment. It aims to immerse the player inside a system in crisis, so that he is not a mere spectator but feels that he shares responsibility for the crisis and has to act to resolve it. The actions of the player are measured and “judged” (by the game mechanism itself), thus determining the stability or instability of the system.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114622809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jian Chen, Wesley Griffin, Henan Zhao, J. Terrill, G. Bryant
We designed and evaluated SplitVector, a new vector field display approach that helps scientists perform discrimination tasks on scientific data shown in virtual environments (VEs). Our empirical study compared the SplitVector approach with three other approaches common in information-rich VEs (IRVEs): direct linear representation, log representation, and text display. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to the linear mapping and by about 4 times compared to the log mapping in discrimination tasks; (2) SplitVectors show no significant differences from the IRVE text display approach, yet reduce clutter; and (3) SplitVectors improve task performance in both mono and stereoscopic conditions.
{"title":"Validation of SplitVector encoding and stereoscopy for quantitative visualization of quantum physics data in virtual environments","authors":"Jian Chen, Wesley Griffin, Henan Zhao, J. Terrill, G. Bryant","doi":"10.1109/VR.2015.7223347","DOIUrl":"https://doi.org/10.1109/VR.2015.7223347","url":null,"abstract":"We designed and evaluated SplitVector, a new vector field display approach to help scientists perform new discrimination tasks on scientific data shown in virtual environments (VEs). Our empirical study compared the SplitVector approach with three other approaches of direct linear representation, log, and text display common in information-rich VEs or IRVEs. Our results suggest the following: (1) SplitVectors improve the accuracy by about 10 times compared to the linear mapping and by 4 times to log in discrimination tasks; (2) SplitVectors lead to no significant differences from the IRVE text display approach, yet reduce the clutter; and (3) SplitVector improved task performance in both mono and stereoscopy conditions.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115116920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a system that enables dynamic 3D interaction with real and virtual objects using an optical see-through head-mounted display and an RGB-D camera. The virtual objects move according to physical laws. The system uses a physics engine to calculate the motion of virtual objects and to detect collisions. In addition, the system detects collisions between virtual objects and real objects in the three-dimensional scene obtained from the camera, which is dynamically updated. A user wears the device and interacts with virtual objects while seated. The system gives users a strong sense of reality through interaction with virtual objects.
{"title":"Dynamic 3D interaction using an optical See-through HMD","authors":"Nozomi Sugiura, T. Komuro","doi":"10.1109/VR.2015.7223444","DOIUrl":"https://doi.org/10.1109/VR.2015.7223444","url":null,"abstract":"We propose a system that enables dynamic 3D interaction with real and virtual objects using an optical see-through head-mounted display and an RGB-D camera. The virtual objects move according to physical laws. The system uses a physics engine for calculation of the motion of virtual objects and collision detection. In addition, the system performs collision detection between virtual objects and real objects in the three-dimensional scene obtained from the camera which is dynamically updated. A user wears the device and interacts with virtual objects in a seated position. The system gives users a great sense of reality through an interaction with virtual objects.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117270667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In previous papers, a novel haploscope-based AR environment was implemented [1, 3]. In that system, a participant looks through a set of reflective lenses onto a real-world environment, while monitors to the side display a virtual object. This object is reflected onto the lenses and is thus, from the participant's viewpoint, overlaid onto the real environment. In Hua [1], some initial work was done on designing a calibration procedure for this haploscope-based AR environment. The current work seeks to modify and expand Hua's original calibration procedure to make it both more effective and more efficient. As part of developing this new calibration procedure, this paper examines potential sources of error and recommends processes and steps for reducing or eliminating them.
{"title":"A procedure for accurate calibration of a tabletop haploscope AR environment","authors":"Nate Phillips, J. Swan","doi":"10.1109/VR.2015.7223394","DOIUrl":"https://doi.org/10.1109/VR.2015.7223394","url":null,"abstract":"In previous papers, a novel haploscope-based AR environment was implemented [1, 3]. In that system, a participant looks through a set of reflective lenses onto a real-world environment. However, at the same time, there are monitors to the side displaying a virtual object. This object is reflected onto the lenses and is thus, from the viewpoint of the participant, overlaid onto the real environment. In Hua [1], some initial work was done designing a calibration procedure for this haploscope-based AR environment. The current work seeks to modify and expand Hua's original calibration procedure to make it both more effective and more efficient. As part of developing this new calibration procedure, this paper examines potential sources of error and recommends processes and steps for reducing or eliminating these potential error sources.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116798236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical virtual environments are currently far from accessible to the blind, as most of their content is visual. While several previous environment-specific tools have indeed increased accessibility to particular environments, they do not offer a generic solution. This is especially unfortunate, as such environments hold great potential for the blind, e.g., for safe orientation and learning. Visual-to-audio Sensory Substitution Devices (SSDs) can potentially increase accessibility in a generic fashion by sonifying the on-screen content regardless of the specific environment. Using SSDs also taps into the skills gained from using the same SSDs for completely different tasks, including in the real world. However, whether congenitally blind users can use this information to perceive virtual environments and interact with them successfully is currently unclear. We tested this using the EyeMusic SSD, which conveys shape and color information, to perform virtual tasks otherwise not possible without vision. We show that these tasks can be accomplished by the congenitally blind.
{"title":"Blind in a virtual world: Using sensory substitution for generically increasing the accessibility of graphical virtual environments","authors":"S. Maidenbaum, S. Abboud, Galit Buchs, A. Amedi","doi":"10.1109/VR.2015.7223381","DOIUrl":"https://doi.org/10.1109/VR.2015.7223381","url":null,"abstract":"Graphical virtual environments are currently far from accessible to the blind as most of their content is visual. While several previous environment-specific tools have indeed increased accessibility to specific environments they do not offer a generic solution. This is especially unfortunate as such environments hold great potential for the blind, e.g., for safe orientation and learning. Visual-to-audio Sensory Substitution Devices (SSDs) can potentially increase their accessibility in such a generic fashion by sonifying the on-screen content regardless of the specific environment. Using SSDs also taps into the skills gained from using these same SSDs for completely different tasks, including in the real world. However, whether congenitally blind users will be able to use this information to perceive and interact successfully virtually is currently unclear. We tested this using the EyeMusic SSD, which conveys shape and color information, to perform virtual tasks otherwise not possible without vision. We show that these tasks can be accomplished by the congenitally blind.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130020545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
David J. Zielinski, H. Rao, M. Sommer, Regis Kopper
In virtual reality applications, the aim is to provide real-time graphics running at high refresh rates. However, in many situations this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high-persistence image artifact: movement loses continuity, and the image jumps from one frame to the next. In this paper, we present an initial exploration of the effects of high-persistence frames caused by low refresh rates, comparing them to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low-frame-rate simulation images are displayed with low persistence by blanking out the display during the extra time each image would otherwise remain on screen. To isolate the visual effects, we constructed a simulator for low- and high-persistence displays that does not affect input latency. A controlled user study compared the three conditions on 3D selection and navigation tasks. Results indicate that the low-persistence display technique may not negatively impact user experience or performance compared to the high-persistence case. Directions for future work on the use of low-persistence displays in low-frame-rate situations are discussed.
{"title":"Exploring the effects of image persistence in low frame rate virtual environments","authors":"David J. Zielinski, H. Rao, M. Sommer, Regis Kopper","doi":"10.1109/VR.2015.7223319","DOIUrl":"https://doi.org/10.1109/VR.2015.7223319","url":null,"abstract":"In virtual reality applications, there is an aim to provide real time graphics which run at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high persistence frames caused by low refresh rates and compare it to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time such image would be displayed. In order to isolate the visual effects, we constructed a simulator for low and high persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low persistence display technique may not negatively impact user experience or performance as compared to the high persistence case. 
Directions for future work on the use of low persistence displays for low frame rate situations are discussed.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130200085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jooyoung Lee, Hasup Lee, Boyu Gao, Hyungseok Kim, Jee-In Kim
We introduce a method for using multiple devices as windows for interacting with a 3-D virtual environment. Our work is motivated by the collaborative workspaces that can be formed from the multiple devices found in daily life, such as desktop PCs and mobile devices. Given a life-size virtual environment, each device shows a scene of the 3-D virtual space according to its own position and direction, allowing users to perceive the virtual space in a more immersive way. By adding mobile devices to the system, users can not only see beyond the stationary screen by relocating their mobile device, but also obtain a personalized view of the working space. To acquire each device's pose and orientation, we adopt vision-based approaches. Finally, we present an implementation of a system for managing multiple devices and keeping their rendering synchronized.
{"title":"Multiple devices as windows for virtual environment","authors":"Jooyoung Lee, Hasup Lee, Boyu Gao, Hyungseok Kim, Jee-In Kim","doi":"10.1109/VR.2015.7223374","DOIUrl":"https://doi.org/10.1109/VR.2015.7223374","url":null,"abstract":"We introduce a method for using multiple devices as windows for interacting with 3-D virtual environment. Motivation of our work has come from generating collaborative workspace with multiple devices which can be found in our daily lives, like desktop PC and mobile devices. Provided with life size virtual environment, each device shows a scene of 3-D virtual space on its position and direction, and users would be able to perceive virtual space in more immersive way with it. By adopting mobile device to our system, users not only see outer space of stationary screen by relocating their mobile device, but also have personalized view in working space. To acquiring knowledge of device's pose and orientation, we adopt vision-based approaches. For the last, we introduce an implementation of a system for managing multiple device and letting them have synchronized performance.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132875151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}