Lane Phillips, B. Ries, Michael Kaeding, V. Interrante
Non-photorealistically rendered (NPR) immersive virtual environments (IVEs) can facilitate conceptual design in architecture by enabling preliminary design sketches to be previewed and experienced at full scale, from a first-person perspective. However, it is critical to ensure the accurate spatial perception of the represented information, and many studies have shown that people typically underestimate distances in most IVEs, regardless of rendering style. In previous work we have found that while people tend to judge distances more accurately in an IVE that is a high-fidelity replica of their concurrently occupied real environment than in an IVE that is a photorealistic representation of a real place they have never been to, significant distance estimation errors re-emerge when the replica environment is represented in an NPR style. We have also previously found that distance estimation accuracy can be improved, in photorealistically rendered novel virtual environments, when people are given a fully tracked, high-fidelity, first-person avatar self-embodiment. In this paper we report the results of an experiment that seeks to determine whether providing users with a high-fidelity avatar self-embodiment in an NPR virtual replica environment will enable them to perceive the 3D spatial layout of that environment more accurately. We find that users who are given a first-person avatar in an NPR replica environment judge distances more accurately than users who experience the NPR replica room without an embodiment, but not as accurately as users whose distance judgments are made in a photorealistically rendered virtual replica room. Our results provide a partial solution to the problem of facilitating accurate distance perception in NPR virtual environments, while supporting and expanding the scope of previous findings that giving people a realistic avatar self-embodiment in an IVE can help them interpret what they see through an HMD more similarly to how they would interpret a corresponding visual stimulus in the real world.
{"title":"Avatar self-embodiment enhances distance perception accuracy in non-photorealistic immersive virtual environments","authors":"Lane Phillips, B. Ries, Michael Kaeding, V. Interrante","doi":"10.1109/VR.2010.5444802","DOIUrl":"https://doi.org/10.1109/VR.2010.5444802","url":null,"abstract":"Non-photorealistically rendered (NPR) immersive virtual environments (IVEs) can facilitate conceptual design in architecture by enabling preliminary design sketches to be previewed and experienced at full scale, from a first-person perspective. However, it is critical to ensure the accurate spatial perception of the represented information, and many studies have shown that people typically underestimate distances in most IVEs, regardless of rendering style. In previous work we have found that while people tend to judge distances more accurately in an IVE that is a high-fidelity replica of their concurrently occupied real environment than in an IVE that it is a photorealistic representation of a real place that they've never been to, significant distance estimation errors re-emerge when the replica environment is represented in a NPR style. We have also previously found that distance estimation accuracy can be improved, in photo-realistically rendered novel virtual environments, when people are given a fully tracked, high fidelity first person avatar self-embodiment. In this paper we report the results of an experiment that seeks to determine whether providing users with a high-fidelity avatar self-embodiment in a NPR virtual replica environment will enable them to perceive the 3D spatial layout of that environment more accurately. We find that users who are given a first person avatar in an NPR replica environment judge distances more accurately than do users who experience the NPR replica room without an embodiment, but not as accurately as users whose distance judgments are made in a photorealistically rendered virtual replica room. Our results provide a partial solution to the problem of facilitating accurate distance perception in NPR virtual environments, while supporting and expanding the scope of previous findings that giving people a realistic avatar self-embodiment in an IVE can help them to interpret what they see through an HMD in a way that is more similar to how they would interpret a corresponding visual stimulus in the real world.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126609127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A collaborative virtual environment (CVE) allows remote users to access and modify shared data through networks such as the Internet. However, when users are connected via the Internet, network latency may become significant and degrade the performance of user interactions. Existing work on the network latency problem mainly focuses on developing motion prediction methods that appear statistically accurate for certain applications; however, it is often not known how reliable they are in a CVE. In this work, we study the sources of error introduced by a motion predictor and propose to address these errors by estimating the error bounds of each prediction the predictor makes. Without loss of generality, we discuss how the upper and lower error bounds may be estimated for a particular motion predictor. Finally, we evaluate our method extensively through a number of experiments and show the effectiveness of using the estimated error bounds in an area-based visibility culling algorithm for distributed virtual environment (DVE) navigation.
{"title":"On error bound estimation for motion prediction","authors":"Rynson W. H. Lau, Kenneth Lee","doi":"10.1109/VR.2010.5444795","DOIUrl":"https://doi.org/10.1109/VR.2010.5444795","url":null,"abstract":"A collaborative virtual environment (CVE) allows remote users to access and modify shared data through networks, such as the Internet. However, when the users are connected via the Internet, the network latency problem may become significant and affect the performance of user interactions. Existing works to address the network latency problem mainly focus on developing motion prediction methods that appear statistically accurate for certain applications. However, it is often not known how reliable they are in a CVE. In this work, we study the sources of error introduced by a motion predictor and propose to address the errors by estimating the error bounds of each prediction made by the motion predictor. Without loss of generality, we discuss how we may estimate the upper and lower error bounds based on a particular motion predictor. Finally, we evaluate the effectiveness of our method extensively through a number of experiments and show the effectiveness of using the estimated error bound in an area-based visibility culling algorithm for DVE navigation.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114425637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frank Steinicke, G. Bruder, K. Hinrichs, P. Willemsen
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper we introduce change blindness techniques for stereoscopic projection systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for passive and active stereoscopic viewing and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
{"title":"Change blindness phenomena for stereoscopic projection systems","authors":"Frank Steinicke, G. Bruder, K. Hinrichs, P. Willemsen","doi":"10.1109/VR.2010.5444790","DOIUrl":"https://doi.org/10.1109/VR.2010.5444790","url":null,"abstract":"In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper we introduce change blindness techniques for stereoscopic projection systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for passive and active stereoscopic viewing and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117099304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a systematic study on the recognition of 3D gestures using spatially convenient input devices. Specifically, we examine the linear-acceleration-sensing Nintendo Wii Remote coupled with the angular-velocity-sensing Nintendo Wii MotionPlus. For the study, we created a 3D gesture database, collecting data on 25 distinct gestures totaling 8,500 gesture samples. Our experiment explores how the number of gestures and the number of training samples per gesture used to train two commonly used machine learning algorithms, a linear classifier and an AdaBoost classifier, affect overall recognition accuracy. We examined these gesture recognition algorithms with user-dependent and user-independent training approaches and explored the effect of using the Wii Remote with and without the Wii MotionPlus attachment. Our results show that in the user-dependent case, both the AdaBoost and linear classification algorithms can recognize up to 25 gestures at over 90% accuracy with 15 training samples per gesture, and up to 20 gestures at over 90% accuracy with only five training samples per gesture. In particular, all 25 gestures could be recognized at over 99% accuracy with the linear classifier using 15 training samples per gesture, with the Wii Remote coupled with the Wii MotionPlus. In addition, both algorithms can recognize up to nine gestures at over 90% accuracy using a user-independent training database with 100 samples per gesture. The Wii MotionPlus attachment played a significant role in improving accuracy in both the user-dependent and user-independent cases.
{"title":"Breaking the status quo: Improving 3D gesture recognition with spatially convenient input devices","authors":"Michael Hoffman, Paul Varcholik, J. Laviola","doi":"10.1109/VR.2010.5444813","DOIUrl":"https://doi.org/10.1109/VR.2010.5444813","url":null,"abstract":"We present a systematic study on the recognition of 3D gestures using spatially convenient input devices. Specifically, we examine the linear acceleration-sensing Nintendo Wii Remote coupled with the angular velocity-sensing Nintendo Wii MotionPlus. For the study, we created a 3D gesture database, collecting data on 25 distinct gestures totalling 8500 gestures samples. Our experiment explores how the number of gestures and the amount of gestures samples used to train two commonly used machine learning algorithms, a linear and AdaBoost classifier, affect overall recognition accuracy. We examined these gesture recognition algorithms with user dependent and user independent training approaches and explored the affect of using the Wii Remote with and without the Wii MotionPlus attachment. Our results show that in the user dependent case, both the Ad-aBoost and linear classification algorithms can recognize up to 25 gestures at over 90% accuracy, with 15 training samples per gesture, and up to 20 gestures at over 90% accuracy, with only five training samples per gesture. In particular, all 25 gestures could be recognized at over 99% accuracy with the linear classifier using 15 training samples per gesture, with the Wii Remote coupled with the Wii MotionPlus. In addition, both algorithms can recognize up to nine gestures at over 90% accuracy using a user independent training database with 100 samples per gesture. The Wii MotionPlus attachment played a significant role in improving accuracy in both the user dependent and independent cases.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121182468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Perez-Gutierrez, Diana Marcela Martinez, Oscar Ernesto Rojas
The use of surgical simulators with virtual reality allows surgeons to practice diverse types of procedures while minimizing risk to patients during real surgery. This paper describes a prototype minimally invasive surgery (MIS) simulator with haptic feedback, applied to endoscopic endonasal surgery. The research is divided into three parts: the simulation of a rigid endoscope with four degrees of freedom (pitch, yaw, roll, and dolly), a simplified model of the nasal tissue for fast haptic rendering, and the integration with a virtual reality simulation system with collision detection. The main contribution of the paper is the rigid endoscope model, which uses a simple lever acting as a force transformer pivoting in the nostril. The results show that the system responds suitably for interactive simulation.
{"title":"Endoscopic endonasal haptic surgery simulator prototype: A rigid endoscope model","authors":"B. Perez-Gutierrez, Diana Marcela Martinez, Oscar Ernesto Rojas","doi":"10.1109/VR.2010.5444756","DOIUrl":"https://doi.org/10.1109/VR.2010.5444756","url":null,"abstract":"The use of surgical simulators with virtual reality allows surgeons to practice diverse types of procedures for minimizing any risk on patients during the real surgery. This paper describes a prototype for a Minimal Invasive Surgery (MIS) simulator with haptic feedback applied to an endoscopic endonasal surgery. The research is divided in three parts: the simulation of a rigid endoscope device with four degrees of freedom (pitch, yaw, roll and dolly), a simplified model of the nasal tissue for fast haptic rendering and the integration with a virtual reality simulation system with collision detection. The main contribution of the paper is the rigid endoscope model using a simple lever acting as a force transformer pivoting in the nostril. The obtained results show a suitable response of the system for an interactive simulation.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127663025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roberto C. Cavalcante Vieira, C. Vidal, J. B. C. Neto
Nowadays, applications of virtual reality (VR) and computer games use human character models of ever-increasing sophistication. Additional challenges are posed by applications such as life-simulation computer games (The Sims, Spore, etc.), internet-based virtual worlds (Second Life), and animation movies, which require simulation of kinship and interaction between isolated populations with well-defined ethnic characteristics. The main difficulty in these situations is automatically generating models that are physically similar to a given population or family. In this paper, human reproduction is mimicked to produce character models that inherit genetic characteristics from their ancestors. Unlike morphing techniques, our method allows a genetic characteristic from an ancestor to manifest only after a few generations.
{"title":"Simulation of genetic inheritance in the generation of virtual characters","authors":"Roberto C. Cavalcante Vieira, C. Vidal, J. B. C. Neto","doi":"10.1109/VR.2010.5444803","DOIUrl":"https://doi.org/10.1109/VR.2010.5444803","url":null,"abstract":"Nowadays, applications of virtual reality (VR) and computer games use human characters models with ever-increasing sophistication. Additional challenges are posed by applications, such as life-simulation computer games (The Sims, Spore, etc.), internet-based virtual worlds (Second Life) and animation movies, that require simulation of kinship and interaction between isolated populations with well defined ethnic characteristics. The main difficulty in those situations is to generate models automatically, which are physically similar to a given population or family. In this paper, human reproduction is mimicked to produce character models, which inherit genetic characteristics from their ancestors. Unlike morphing techniques, in our method, it is possible that a genetic characteristic from an ancestor be manifested only after a few generations.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127430333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an analysis of four orientation tracking systems used for the construction of environment maps. We discuss the analysis necessary to determine the robustness of tracking systems in general. Because collecting user evaluation data is inherently difficult, we then propose a metric that can be used to obtain a relative estimate of tracking robustness. The proposed metric still requires a set of input videos with an associated distance to ground truth, but no additional user evaluation.
{"title":"Evaluation of tracking robustness in real time panorama acquisition","authors":"Christopher Coffin, Sehwan Kim, Tobias Höllerer","doi":"10.1109/VR.2010.5444774","DOIUrl":"https://doi.org/10.1109/VR.2010.5444774","url":null,"abstract":"We present an analysis of four orientation tracking systems used for construction of environment maps. We discuss the analysis necessary to determine the robustness of tracking systems in general. Due to the difficulty inherent in collecting user evaluation data, we then propose a metric which can be used to obtain a relative estimate of these values. The proposed metric will still require a set of input videos with an associated distance to ground truth, but not an additional user evaluation.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130767303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New immersive display systems are emerging, providing new platforms on which to present 3D data and virtual worlds [7][12]. However, little effort has been spent evaluating these systems or providing guiding design principles from a human factors point of view. The objective of the proposed work is to compare performance and user interaction across two immersive displays. The goal is to compare a low-cost, multi-screen, spatially immersive visualization facility to a more expensive system. The low-cost system is built from off-the-shelf components by arranging LCD displays in a tiled semi-circle [7]. The more expensive system is a semi-rigid, rear-projected, continuously curved screen designed by Rockwell Collins [8]. Our hypothesis is that low-cost systems present a perceptually equivalent visual experience, despite the image seams introduced where the display screens meet. Psychophysical experiments will compare the two systems through performance-based human judgments. The knowledge gained from these experiments should help make immersive visualization systems available in areas that currently cannot afford, or justify the expense of, such systems.
{"title":"The effect of tiled display on performance in multi-screen immersive virtual environments","authors":"A. Agana, Megha Davalath, Ann McNamara, F. Parke","doi":"10.1109/VR.2010.5444781","DOIUrl":"https://doi.org/10.1109/VR.2010.5444781","url":null,"abstract":"New immersive display systems are emerging, providing new platforms to present 3D data and virtual worlds [7][12]. However, little effort has been spent evaluating these systems, or providing guiding design principles from a human factors point of view. The objective of the proposed work is to compare performance and user interaction across two immersive displays. The goal is to compare a low-cost, multi-screen, spatially immersive visualization facility to a more expensive system. The low cost system is designed using off-the-shelf components and constructed by arranging LCD displays in a tiled semi-circle [7]. The more expensive system is a semi-rigid, rear projected, continuous curved screen, designed by Rockwell Collins [8]. Our hypothesis is that low-cost systems present a perceptually equivalent visual experience, despite image seams introduced by the connecting display screens. Psychophysical experimentation will compare the two systems through human judgements based on performance. The outcomes, through knowledge gained from the experiments, should make availability of immersive visualization systems possible in areas currently unable to afford such systems or justify the expense.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122198164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a prototype mobile augmented reality client addition to the "Image Space" mixed reality media sharing service. We have explored how the real-world-aligned "mirror world" content from that service can be interacted with in situ, and have identified two different use scenarios: geospatial media sharing and social connection. Since both the existing web-based mirror world modality and the additional mobile augmented reality modality intersect at the common data, the combined service is an example of how mirror worlds can be used to bridge the real and the virtual, and to allow interaction from either side of the reality continuum.
{"title":"An augmented reality view on mirror world content, with Image Space","authors":"David J. Murphy, M. Kahari, Ville-Veikko Mattila","doi":"10.1109/VR.2010.5444761","DOIUrl":"https://doi.org/10.1109/VR.2010.5444761","url":null,"abstract":"We present a prototype mobile augmented reality client addition to the ¿Image Space¿ mixed reality media sharing service. We have explored how the real world aligned \"mirror world\" content from that service can be interacted with in-situ and identified two different use scenarios - geospatial media sharing and social connection. Since both the existing web based mirror world modality and the additional mobile augmented reality modality intersect at the common data, the combined service is as an example of how mirror worlds can be used to bridge the real and the virtual, and allow for interaction from either side of the reality continuum.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"1 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113932189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. Tsirlin, E. Dupierrix, S. Chokron, T. Ohlmann, S. Coquillart
In this paper, we describe a virtual reality application developed for the study of unilateral spatial neglect, a post-stroke neurological disorder that results in a failure to respond to stimuli presented contralaterally to the damaged hemisphere. It has recently been proposed that patients with unilateral spatial neglect experience sensorimotor decorrelation in the affected space. Consequently, it is possible that because the sensorimotor experience in the affected space is perturbed, patients avoid this space, which results in neglect behavior. Here, we evaluate this hypothesis using a virtual reality application built on the Stringed Haptic Workbench, a large-scale visuo-haptic system. The results support the hypothesis and demonstrate that the proposed application is suitable for its envisioned goal.
{"title":"Multimodal virtual reality application for the study of unilateral spatial neglect","authors":"I. Tsirlin, E. Dupierrix, S. Chokron, T. Ohlmann, S. Coquillart","doi":"10.1109/VR.2010.5444800","DOIUrl":"https://doi.org/10.1109/VR.2010.5444800","url":null,"abstract":"In the present paper, we describe a virtual reality application developed for the study of unilateral spatial neglect, a post-stroke neurological disorder that results in failure to respond to stimuli presented contralaterally to the damaged hemisphere. Recently, it has been proposed that patients with unilateral spatial neglect experience sensorimotor decorrelation in the affected space. Consequently, it is possible that since the sensorimotor experience in the affected space is perturbed, patients avoid this space, which results in neglect behavior. Here, we evaluate this hypothesis using a virtual reality application built on the base of the Stringed Haptic Workbench, a large-scale visuo-haptic system. The results provide support for the hypothesis and demonstrate that the proposed application is suitable for the envisioned goal.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"15 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114029042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}