A frequently observed problem in virtual environments is the underestimation of egocentric depth. This problem has been described numerous times and with widely varying degrees of severity. Though there has been considerable progress made in modifying observer behavior to compensate for these misperceptions, the question of why these errors exist is still an open issue. The study detailed in this document presents the preliminary findings of a large, between-subjects experiment (N=98) that attempts to identify and quantify the source of a pattern of adaptation and improved accuracy in the absence of explicit feedback found in Jones et al. [1].
"Peripheral visual information and its effect on the perception of egocentric depth in virtual and augmented environments." J. A. Jones, J. Swan, Gurjot Singh, S. Ellis. 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759475
Richard Skarbez, Aaron Kotranza, F. Brooks, Benjamin C. Lok, M. Whitton
We present a new method for evaluating user experience in interactions with virtual humans (VHs). We code the conversational errors made by the VH. These errors, in addition to the duration of the interaction and the numbers of statements made by the participant and the VH, provide objective, quantitative data about the virtual social interaction. We applied this method to a set of previously collected interactions between medical students and VH patients and present preliminary results. The error metrics do not correlate with traditional measures of the quality of a virtual experience, e.g. presence and copresence questionnaires. The error metrics were significantly correlated with scores on the Maastricht Assessment of Simulated Patients (MaSP), a scenario-appropriate measure of simulation quality, suggesting further investigation is warranted.
"An initial exploration of conversational errors as a novel method for evaluating virtual human experiences." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759489
Virtual Reality (VR) immersive environments are becoming more popular and less costly; as a result, VR labs are becoming a standard part of any research that depends on visualization. This has created the need to port many 3D desktop visualization applications to VR. Porting application GUIs is a problem, since the original GUIs are 2D by nature and using them directly can obscure a large area of the 3D viewport and spoil the immersive experience. On the other hand, rewriting a GUI in 3D can be a time-consuming and tedious task. In this work, we introduce a technique for embedding 2D GUIs into 3D Virtual Environments (VEs). Our approach immerses existing 2D GUIs into the VE, allowing rapid GUI development for VR applications. It can also be used to port 3D desktop applications without rewriting their GUI code. Further, it enables embedding many window-based desktop applications into the VE, creating rich VEs in which users can work with multiple applications simultaneously.
"VEGI: Virtual Environment GUI Immersion system." Mohammed Elfarargy, M. Nagi, N. Adly. 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759470
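The paper does not publish implementation details, but one core piece of any 2D-GUI-in-VE embedding is redirecting pointer input: a ray hit on the textured quad showing the GUI must be mapped back to pixel coordinates in the original 2D window. A minimal sketch of that mapping, under assumed conventions (the function name and coordinate conventions are illustrative, not from VEGI):

```python
def uv_to_window_pixel(u: float, v: float, win_w: int, win_h: int) -> tuple:
    """Map a (u, v) hit point on a textured quad to window pixel coordinates.

    Assumes texture coordinates in [0, 1] with v pointing up, and a 2D GUI
    window whose origin is the top-left corner (the common toolkit convention).
    """
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        raise ValueError("hit point lies outside the quad")
    x = min(int(u * win_w), win_w - 1)           # column, clamped to last pixel
    y = min(int((1.0 - v) * win_h), win_h - 1)   # row, flipped because v is up
    return x, y
```

The returned pixel position can then be injected into the windowing system as a synthetic mouse event, so the unmodified 2D application responds as if clicked directly.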
Evan A. Suma, Seth Clark, D. Krum, Samantha L. Finkelstein, M. Bolas, Z. Wartell
We present change blindness redirection, a novel technique for allowing the user to walk through an immersive virtual environment that is considerably larger than the available physical workspace. In contrast to previous redirection techniques, this approach, based on a dynamic environment model, does not introduce any visual-vestibular conflicts from manipulating the mapping between physical and virtual motions, nor does it require breaking presence to stop and explicitly reorient the user. We conducted two user studies to evaluate the effectiveness of the change blindness illusion when exploring a virtual environment that was an order of magnitude larger than the physical walking space. Despite the dynamically changing environment, participants were able to draw coherent sketch maps of the environment structure, and pointing task results indicated that they were able to maintain their spatial orientation within the virtual world. Only one out of 77 participants across both studies definitively noticed that a scene change had occurred, suggesting that change blindness redirection provides a remarkably compelling illusion. Secondary findings revealed that a wide field-of-view increases pointing accuracy and that experienced gamers reported greater sense of presence than those with little or no experience with 3D video games.
"Leveraging change blindness for redirection in virtual environments." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759455
M. Mokhtari, Eric Boivin, D. Laurendeau, S. Comtois, D. Ouellet, Julien-Charles Levesque, Etienne Ouellet
This paper presents an immersive, human-centric virtual work cell for dynamically analyzing complex situations. The environment is supported by a custom open architecture and is composed of objects of complementary nature that reflect the level of human understanding. It is controlled by an intuitive 3D bimanual gestural interface using data gloves.
"IMAGE — Complex situation understanding: An immersive concept development." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759482
Jae Yeol Lee, Minseok Kim, Dongwoo Seo, Chil-Woo Lee, Jae Sung Kim, Sang Min Lee
We present a new form of dual interaction between a multi-display and smartphones for collaborative design review and sharing among multiple users. It provides individual and cooperative interactions in immersive and non-immersive environments by decoupling private and public spaces between the multi-display and the smartphone. Thus, it can provide different views to multiple users and let them communicate with each other more adaptively according to the shared contents. We show implementation results by demonstrating digital product review and multimedia interactions among multiple users.
"Dual interactions between multi-display and smartphone for collaborative design and sharing." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759478
Subhashini Ganapathy, Glen J. Anderson, I. Kozintsev
This poster will present findings from a study of a shopping assistant prototype with simulated augmented reality information. The goal of the study was to find out the acceptable level of delay in presentation of augmented information and the acceptable rate of error of the information presented. Twelve participants interacted with a Samsung Omnia™ smartphone that presented a wine shopping scenario under several levels of delay in showing product information. Participants indicated their willingness to wait for each delay they experienced. Participants also answered a survey about which types of products they would want a shopping assistant application to assist with.
"MAR shopping assistant usage: Delay, error, and utility." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759471
Masaaki Miyaura, Takuji Narumi, Kunihiro Nishimura, T. Tanikawa, M. Hirose
This study aims to increase the effects of odors by controlling odor emissions. We developed an olfactory feedback system that maintains the user's concentration based on biological information. The system estimates the user's concentration level through an electrocardiogram analysis, carried out using the variance of R-R intervals and their frequency spectra. According to the estimated concentration level, the system presents an odor with an awakening effect to maintain the user's concentration. We performed experiments in which subjects did simple tasks with this system while we varied the method of controlling odor emissions. Comparing the results of each method, we evaluated the effectiveness of our olfactory feedback system and of the emission-control methods. The comparison showed that presenting an odor when a user loses concentration is effective in decreasing errors on addition tasks.
"Olfactory feedback system to improve the concentration level based on biological information." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759452
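The time-domain half of the concentration estimate described above (variance of R-R intervals) can be sketched in a few lines; the threshold value and function names here are illustrative assumptions, not taken from the paper, and the frequency-spectrum analysis is omitted:

```python
from statistics import variance

def rr_intervals(r_peak_times):
    """R-R intervals in seconds from a sorted list of R-peak timestamps."""
    return [b - a for a, b in zip(r_peak_times, r_peak_times[1:])]

def concentration_low(r_peak_times, var_threshold=0.005):
    """Treat high R-R variance as a presumed drop in concentration.

    The 0.005 s^2 threshold is illustrative only; the paper also uses
    frequency spectra of the R-R series, which this sketch omits.
    """
    return variance(rr_intervals(r_peak_times)) > var_threshold

def olfactory_feedback(r_peak_times):
    """Emit the awakening odor only when concentration is presumed low."""
    return "emit_odor" if concentration_low(r_peak_times) else "idle"
```

With perfectly regular heartbeats the R-R variance is zero and the system stays idle; irregular beat timing raises the variance above the threshold and triggers the odor emission.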
We present an application that delivers 3D models in an open format adjusted to a requesting device's requirements and specifications. We define an X3D format that describes each kind of object we consider part of an urban model, such as blocks, buildings, cars, pedestrians, streetlights, and mass transit, among others. This way, an application that requests the urban model can manipulate each of its components independently. The model stores not only the graphic representation of objects but also the animation and setup of a given scene. Scenes are stored in the database as independent objects that applications can request. The application can deliver the same model at different levels of detail (LOD) according to the device or application making the request.
"Scalable content for urban applications." Juan Diego Toro, P. Figueroa. 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759493
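The LOD-on-request behavior described above amounts to a selection over per-object representations. A minimal sketch, assuming objects store one X3D fragment per LOD level (the class and function names are our own illustration, not the paper's API):

```python
from dataclasses import dataclass, field

@dataclass
class UrbanObject:
    """One component of the urban model (block, building, car, ...)
    with an X3D fragment stored per level of detail."""
    kind: str
    lods: dict = field(default_factory=dict)  # LOD level -> X3D fragment

def select_lod(obj: UrbanObject, max_lod: int) -> str:
    """Return the richest representation the requesting device can handle."""
    usable = [level for level in obj.lods if level <= max_lod]
    if not usable:
        raise ValueError(f"no LOD <= {max_lod} stored for {obj.kind}")
    return obj.lods[max(usable)]
```

A low-end mobile client would request with a small `max_lod` and receive the coarse fragment, while a VR lab workstation would receive the most detailed one from the same stored model.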
Yuichi Hirano, Asako Kimura, F. Shibata, H. Tamura
In a mixed-reality (MR) environment, the appearance of touchable objects can be changed by superimposing a computer-generated image (CGI) onto them (MR visual stimulation). At the same time, when humans sense the hardness of real objects, their perception is known to be influenced not only by tactile information but also by visual information. In this paper, we study the psychophysical influence on the sense of hardness of a real object with a CGI superimposed on it. In our experiment, we exaggerate the deformation of the CGI animation on the real object while the subject pushes the object with a finger. The results show that subjects sensed different hardnesses when the dent deformation of the CGI animation was emphasized.
"Psychophysical influence of mixed-reality visual stimulation on sense of hardness." 2011 IEEE Virtual Reality Conference. doi:10.1109/VR.2011.5759436