"Blind in a virtual world: Mobility-training virtual reality games for users who are blind". S. Maidenbaum, A. Amedi. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223435 (https://doi.org/10.1109/VR.2015.7223435).
One of the main challenges facing the practical use of new assistive technology for the blind is training. This is true both for mastering the device and, even more importantly, for learning to use it in specific environments. Such training usually requires external help, which is not always available and can be costly, and attempting to navigate without such preparation can be dangerous. Here we demonstrate several games developed in our lab as part of the training programs for the EyeCane, a device that augments the traditional white cane with additional sensed distances and angles. These games avoid the above-mentioned problems of availability, cost, and safety, and additionally use gamification techniques to boost the training process. Visitors to the demonstration will use these devices to perform simple in-game virtual tasks, such as finding the exit from a room or avoiding obstacles, while wearing blindfolds.
"GPU-accelerated attention map generation for dynamic 3D scenes". Thies Pfeiffer, Cem Memili. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223393 (https://doi.org/10.1109/VR.2015.7223393).
Measuring visual attention has become an important tool during product development. Attention maps are important qualitative visualizations for communicating results within the team and to stakeholders. We have developed a GPU-accelerated approach that allows for real-time generation of attention maps for 3D models, which can, for example, be used for on-the-fly visualization of visual attention distributions and for the generation of heat-map textures for offline high-quality renderings. The presented approach is unique in that it works with monocular and binocular data, respects the depth of focus, can handle moving objects, and is ready to be used for selective rendering.
"'IXV-trajectory' and 'IXV-asset': Virtual reality applications for the aerothermodynamics analysis of IXV". Agata Marta Soccini, M. Marello, N. Balossino, V. Basso. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223463 (https://doi.org/10.1109/VR.2015.7223463).
The design and engineering of space vehicles involve several different disciplines, each of which has its own research methodologies and software tools. Interaction among the physical analysis tools, as well as collaboration among teams, is a delicate and difficult topic in collaborative engineering research. This paper presents a solution based on a new technical and interaction design. We encourage the development of virtual reality 3D applications as interfaces to the tools that perform the quantitative physical analysis of a given space vehicle, as this approach improves productivity and communication effectiveness and brings consistent benefits to collaborative engineering and design processes. As an evaluation of this method, we report the design and implementation of two virtual reality applications developed specifically for the European Space Agency's space vehicle IXV. This evaluation showed that users found the virtual reality interface easy to use and likely to be useful in their own work. The demonstrated benefits led to the adoption of this approach as the future standard methodology in the pre-launch phases of space vehicles.
"Effects and applicability of rotation gain in CAVE-like environments". Sebastian Freitag, B. Weyers, T. Kuhlen. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223354 (https://doi.org/10.1109/VR.2015.7223354).
In this work, we report on a pilot study we conducted, and on a study design, to examine the effects and applicability of rotation gain in CAVE-like virtual environments. The results of the study will give recommendations for the maximum levels of rotation gain that are reasonable in algorithms for enlarging the virtual field of regard or for redirected walking.
"An experimental study on the virtual representation of children". Ranchida Khantong, Xueni Pan, M. Slater. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223367 (https://doi.org/10.1109/VR.2015.7223367).
Is it their movements or their appearance that helps us identify a child as a child? In this work we created four video clips of a walking virtual character, with different combinations of either child or adult animation applied to either a child or adult body. An experimental study was conducted with 53 participants who viewed all four videos in random order. The results show that participants could easily identify the consistent video clips (child animation with child body, and adult animation with adult body). With the inconsistent video clips, both animation and body shape had an effect on participants' judgments. Participants also reported higher levels of empathy, care, and feelings of protection towards the child character than towards the adult character. Finally, compared to appearance, animation seems to play a bigger role in evoking participants' emotional responses.
"Self-characterstics and sound in immersive virtual reality — Estimating avatar weight from footstep sounds". Erik Sikström, Amalia de Götzen, S. Serafin. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223406 (https://doi.org/10.1109/VR.2015.7223406).
This experiment investigated whether a user controlling a full-body avatar via real-time motion tracking in an immersive virtual reality setup would estimate the weight of the virtual avatar differently when the footstep sounds were manipulated with three different audio filter settings. The visual appearance of the avatar was available in two sizes. The subjects performed six walks, with each audio configuration active once on each of two ground types. After completing each walk, the participants were asked to estimate the weight of the virtual avatar and the suitability of the audio feedback. The results indicate that the filters amplifying the two lower center frequencies shifted the subjects' estimates of the avatar body's weight towards heavier, compared with the filter with the higher center frequency. There were no significant differences between the weight estimates of the two groups using the different avatar bodies.
"3D node localization from node-to-node distance information using cross-entropy method". Shohei Ukawa, Tatsuya Shinada, M. Hashimoto, Yuichi Itoh, T. Onoye. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223416 (https://doi.org/10.1109/VR.2015.7223416).
This paper proposes a 3D node localization method that uses the cross-entropy method for a 3D modeling system. The proposed localization method statistically estimates the most probable node positions, overcoming measurement errors, through iterative sample generation and evaluation. The generated samples are evaluated in parallel, yielding a significant speedup. We also demonstrate that the iterative sample generation and evaluation performed in parallel are highly compatible with interactive node movement.
"Live streaming system for omnidirectional video". D. Ochi, Y. Kunita, A. Kameda, Akira Kojima, Shinnosuke Iwaki. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223439 (https://doi.org/10.1109/VR.2015.7223439).
NTT Media Intelligence Laboratories and DWANGO Co., Ltd. have jointly developed a virtual reality system that enables users to have an immersive experience of visiting a remote site. The system lets users watch video content wherever they want by using interactive streaming technology that, within a limited network bandwidth, selectively streams the section the user is watching at a high bitrate. Applying this technology to omnidirectional video allows users to experience a feeling of presence through an intuitive head-mounted display. The system has been released on a commercial platform and has successfully streamed a real-time event. A demonstration is planned in which the details of the system and the streaming service results obtained with it will be presented.
"Augmented reality maintenance demonstrator and associated modelling". Vincent Havard, D. Baudry, A. Louis, B. Mazari. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223429 (https://doi.org/10.1109/VR.2015.7223429).
Augmented reality makes it possible to add virtual objects to a real scene. Interest in it has grown in recent years as mobile devices have become powerful and cheap. Augmented reality is used in many domains, such as maintenance, training, education, entertainment, and medicine. The demonstrator we show focuses on maintenance operations: a step-by-step process is presented to the operator in order to maintain an element of a system. Based on this demonstration, we explain the model we propose for describing an entire maintenance process with augmented reality. Indeed, it is still difficult to create an augmented reality application without computer programming skills. The proposed model will make it possible to create an authoring tool, or to plug into an existing one, so that augmented reality processes can be created without deep computer programming skills.
"Design of portable and accessible platform in charge of wheelchair feedback immersion". S. Richir, Samuel Pineau, É. Monacelli, Frédéric Goncalves, Benjamin Malafosse, C. Dumas, Alain Schmid, J. Perret. 2015 IEEE Virtual Reality (VR). DOI: 10.1109/VR.2015.7223459 (https://doi.org/10.1109/VR.2015.7223459).
Awarded at Laval Virtual 2014, the AccesSim project aims to develop a wheelchair simulator based on virtual reality (VR) and a dynamic force-feedback platform, which makes it possible to experience and evaluate accessibility in complex urban or building environments. To address this issue, the dynamic force-feedback platform must provide haptic and vestibular feedback to various user profiles, from town planners to wheelchair users. The platform needs to be modular and adjustable to each of these profiles. This article focuses on the dynamic force-feedback platform and specifically on the force-feedback systems used.