Measurement of Expression Characteristics in Emotional Situations using Virtual Reality
Kiwan Han, J. Ku, Hyeongrae Lee, Jinsick Park, Sangwoo Cho, Jae-Jin Kim, I. Kim, Sun I. Kim
Expressions are a basic necessity of daily living, as they are required for managing relationships with other people. Conventional expression training has difficulty achieving objective measurement, because its assessment depends on the therapist's ability to judge a patient's state or the effectiveness of the training. In addition, it is difficult to provide emotional and social situations in the same manner in each training or assessment session. Virtual reality techniques can overcome these shortcomings of conventional approaches by providing precise, objective measurements and reproducible emotional and social situations. In this study, we developed a virtual reality prototype that can present emotional situations and measure expression characteristics. Although this is a preliminary study, it shows the potential of virtual reality as an assessment tool.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811047
Efficient Large-Scale Sweep and Prune Methods with AABB Insertion and Removal
Daniel J. Tracy, S. Buss, Bryan M. Woods
We introduce new features for the sweep and prune broad-phase algorithm that increase scalability for large virtual reality environments and allow efficient AABB insertion and removal to support dynamic object creation and destruction. We introduce a novel segmented interval list structure that allows AABB insertion and removal without requiring a full sort of the axes. This algorithm is well suited to large environments in which many objects are not moving at once. We analyze and test implementations of sweep and prune that include subdivision, batch insertion and removal, and segmented interval lists. Our tests show that these techniques provide higher performance than previous sweep and prune methods and perform better than octrees in temporally coherent environments.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811022
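For readers unfamiliar with the broad phase, the sketch below is a minimal, single-axis sweep and prune in Python, assuming simple tuple AABBs; it illustrates the basic technique the paper extends and does not reproduce the authors' segmented interval lists or batch insertion and removal.

# Minimal single-axis sweep and prune sketch (illustrative only; the paper's
# segmented interval lists and batch insertion/removal are not reproduced here).

def sweep_and_prune(aabbs, axis=0):
    """aabbs: dict mapping object id -> (min_corner, max_corner) 3-tuples.
    Returns the set of id pairs whose intervals overlap on the chosen axis."""
    # Build and sort endpoint events: (coordinate, is_max, object id).
    events = []
    for oid, (lo, hi) in aabbs.items():
        events.append((lo[axis], False, oid))
        events.append((hi[axis], True, oid))
    events.sort()

    active = set()          # objects whose interval is currently open
    candidates = set()
    for _, is_max, oid in events:
        if is_max:
            active.discard(oid)
        else:
            for other in active:
                candidates.add(frozenset((oid, other)))
            active.add(oid)
    return candidates

if __name__ == "__main__":
    boxes = {
        "a": ((0, 0, 0), (2, 2, 2)),
        "b": ((1, 1, 1), (3, 3, 3)),   # overlaps "a" on the x axis
        "c": ((5, 0, 0), (6, 1, 1)),   # disjoint from both on the x axis
    }
    print(sweep_and_prune(boxes))      # reports the single candidate pair {'a', 'b'}

Candidate pairs from the swept axis would then be checked against the remaining axes or handed to narrow-phase collision detection.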
Issues with Virtual Space Perception within Reaching Distance: Mitigating Adverse Effects on Applications Using HMDs in the Automotive Industry
Mathias Moehring, Antje Gloystein, R. Dörner
Besides visual validations of virtual car models, immersive applications such as a Virtual Seating Buck enable car designers and engineers to decide product-related issues without building expensive hardware prototypes. To replace real models, it is mandatory that decision makers can rely on VR-based findings. However, especially when using a head-mounted display, users complain about an unnatural perception of space. Such misperceptions have already been reported in the literature, and several evaluation methods have been proposed for researching their possible causes. Unfortunately, most of these methods do not represent the scenarios usually found in the automotive industry, since they focus on distances of five to fifteen meters, which are too large. In this paper, we present an evaluation scenario adapted to size and distance perception within the reach of the user. With this method, we analyzed our standard setups and found a systematic error that is lower than the aberrations reported in earlier research. Furthermore, we tried to mitigate perception errors with a depth-of-field blur applied to the virtual images.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811027
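As a rough illustration of a depth-of-field blur of the kind the authors apply, the sketch below derives a per-pixel blur radius from the standard thin-lens circle of confusion; the lens parameters are invented for the example, and the paper does not specify its blur model.

# Thin-lens circle-of-confusion sketch for a depth-of-field blur (illustrative;
# the focal length, aperture, and sensor resolution below are assumed values).

def circle_of_confusion(depth_m, focus_m, focal_length_m=0.017, aperture_m=0.004):
    """Return the circle-of-confusion diameter (metres on the sensor) for a
    point at depth_m when the lens is focused at focus_m."""
    return (aperture_m * focal_length_m * abs(depth_m - focus_m)
            / (depth_m * (focus_m - focal_length_m)))

def blur_radius_px(depth_m, focus_m, pixels_per_metre_on_sensor=2.0e5):
    """Map the circle of confusion to a per-pixel Gaussian blur radius."""
    return 0.5 * circle_of_confusion(depth_m, focus_m) * pixels_per_metre_on_sensor

if __name__ == "__main__":
    # Points away from the 0.6 m focus plane receive a larger blur radius.
    for depth in (0.4, 0.6, 1.0):
        print(depth, round(blur_radius_px(depth, focus_m=0.6), 2))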
Effective Presentation Technique of Scent Using Small Ejection Quantities of Odor
Junta Sato, Kaori Ohtsu, Yuichi Bannai, Ken-ichi Okada
Trials on the transmission of olfactory information together with audio/visual information are currently underway. However, a problem exists in that continuous emission of scent leaves scent lingering in the air, causing human olfactory adaptation. To resolve this problem, we aimed to minimize the quantity of scent ejected using an ink-jet olfactory display we developed. Following the development of a breath sensor for breath synchronization, we developed an olfactory ejection system that presents scent on each inspiration. We then measured human olfactory characteristics in order to determine the most suitable method for presenting scent on an inspiration. Experiments revealed that the intensity of scent perceived by the user was altered by differences in the presentation method even when the quantity of scent was unchanged. We present here a method of odor presentation that most effectively minimizes the ejection quantities.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811015
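The breath-synchronized ejection logic can be pictured as a simple onset detector on the breath-sensor signal; the sketch below is a hypothetical illustration (the threshold, units, and sample data are assumptions, not the authors' values).

# Sketch of breath-synchronized scent ejection logic (illustrative only).

def ejection_onsets(flow_samples, onset_threshold=0.2):
    """Given a sequence of inspiratory flow samples, return the sample indices
    at which a small scent pulse should be ejected (one pulse per inspiration onset)."""
    onsets = []
    inhaling = False
    for i, flow in enumerate(flow_samples):
        if not inhaling and flow > onset_threshold:
            inhaling = True          # rising edge: inspiration begins, eject now
            onsets.append(i)
        elif inhaling and flow <= onset_threshold:
            inhaling = False         # expiration: re-arm for the next breath
    return onsets

if __name__ == "__main__":
    # Two simulated breaths; a pulse is triggered at the start of each one.
    flow = [0.0, 0.1, 0.5, 0.8, 0.3, 0.0, 0.0, 0.6, 0.9, 0.1]
    print(ejection_onsets(flow))     # [2, 7]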
An Image-Warping Architecture for VR: Low Latency versus Image Quality
F. Smit, R. V. Liere, S. Beck, B. Fröhlich
Designing low end-to-end latency system architectures for virtual reality is still an open and challenging problem. We describe the design, implementation and evaluation of a client-server depth-image warping architecture that updates and displays the scene graph at the refresh rate of the display. Our approach works for scenes consisting of dynamic and interactive objects. The end-to-end latency is minimized and smooth object motion is generated. However, this comes at the expense of the image-quality loss inherent to warping techniques. We evaluate the architecture and its design trade-offs by comparing latency and image quality to a conventional rendering system. Our experience with the system confirms that the approach facilitates common interaction tasks such as navigation and object manipulation.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4810995
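The core of depth-image warping is reprojecting each reference pixel, using its depth, into the latest tracked view. The NumPy sketch below shows only this reprojection step, under an assumed shared pinhole intrinsic matrix; the paper's client-server pipeline, hole handling, and dynamic-object treatment are not represented.

# Minimal depth-image reprojection sketch (illustrative; not the authors' system).
import numpy as np

def warp_depth_image(depth, K, T_ref_to_new):
    """Forward-warp pixels of a reference depth image into a new view.

    depth: (H, W) array of depths along the camera z axis.
    K: (3, 3) pinhole intrinsics shared by both views.
    T_ref_to_new: (4, 4) rigid transform from the reference to the new camera frame.
    Returns an (H, W, 2) array of new-view pixel coordinates for every reference pixel.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N

    # Unproject to 3-D points in the reference camera frame.
    pts_ref = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # 3 x N
    pts_ref_h = np.vstack([pts_ref, np.ones((1, pts_ref.shape[1]))])    # 4 x N

    # Transform into the new camera frame and reproject.
    pts_new = (T_ref_to_new @ pts_ref_h)[:3]                            # 3 x N
    proj = K @ pts_new
    return (proj[:2] / proj[2:3]).T.reshape(H, W, 2)

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    depth = np.full((480, 640), 2.0)      # flat wall 2 m in front of the camera
    T = np.eye(4)
    T[0, 3] = 0.1                         # small lateral offset between the two views
    print(warp_depth_image(depth, K, T)[240, 320])   # centre pixel reprojects to [345. 240.]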
cMotion: A New Game Design to Teach Emotion Recognition and Programming Logic to Children using Virtual Humans
Samantha L. Finkelstein, A. Nickel, Lane Harrison, Evan A. Suma, T. Barnes
This paper presents the design of the final stage of a new game currently in development, entitled cMotion, which will use virtual humans to teach emotion recognition and programming concepts to children. Having multiple facets, cMotion is designed to teach the intended users how to recognize facial expressions and manipulate an interactive virtual character using a visual drag-and-drop programming interface. By creating a game which contextualizes emotions, we hope to foster learning of both emotions in a cultural context and computer programming concepts in children. The game will be completed in three stages which will each be tested separately: a playable introduction which focuses on social skills and emotion recognition, an interactive interface which focuses on computer programming, and a full game which combines the first two stages into one activity.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811039
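One way to picture the drag-and-drop programming facet is as a small command-block interpreter driving the virtual character; the sketch below is purely hypothetical (the block names and character interface are invented here, not taken from cMotion).

# Illustrative command-block interpreter for a drag-and-drop program
# (block names and the character interface are hypothetical, not cMotion's).

class VirtualCharacter:
    def __init__(self):
        self.expression = "neutral"

    def set_expression(self, name):
        self.expression = name
        print(f"character now looks {name}")

def run_blocks(character, blocks):
    """Execute a list of (block_name, argument) tuples assembled by the child."""
    for name, arg in blocks:
        if name == "set_expression":
            character.set_expression(arg)
        elif name == "repeat":
            times, inner = arg
            for _ in range(times):
                run_blocks(character, inner)
        else:
            raise ValueError(f"unknown block: {name}")

if __name__ == "__main__":
    program = [
        ("set_expression", "happy"),
        ("repeat", (2, [("set_expression", "surprised"),
                        ("set_expression", "neutral")])),
    ]
    run_blocks(VirtualCharacter(), program)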
Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments
W. Steptoe, Oyewole Oyekoya, A. Murgia, R. Wolff, John P Rae, Estefania Guimaraes, D. Roberts, A. Steed
In face-to-face collaboration, eye gaze is used both as a bidirectional signal to monitor and indicate focus of attention and action, and as a resource to manage the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars representing and controlled by each participant share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVE™-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants, to a gaze model informed by head orientation for saccade generation, and static gaze featuring non-moving eyes. We analyse task performance, subjective user experience, and interactional behaviour. While not providing statistically significant benefit over static gaze, tracked gaze is observed as the highest-performing condition. However, the gaze model resulted in significantly lower task performance and increased error rate.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811003
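As a simplified stand-in for a head-orientation-informed gaze model, the sketch below scatters fixation targets around the current head direction at randomized intervals; the distributions and parameters are assumptions and do not reproduce the authors' saccade model.

# Illustrative head-orientation-driven saccade generator (parameters assumed).
import random

def generate_saccades(head_yaw_deg, head_pitch_deg, duration_s,
                      mean_fixation_s=0.5, scatter_deg=5.0, seed=0):
    """Return a list of (time_s, eye_yaw_deg, eye_pitch_deg) fixation targets
    clustered around the current head direction."""
    rng = random.Random(seed)
    saccades, t = [], 0.0
    while t < duration_s:
        eye_yaw = head_yaw_deg + rng.gauss(0.0, scatter_deg)
        eye_pitch = head_pitch_deg + rng.gauss(0.0, scatter_deg)
        saccades.append((round(t, 3), eye_yaw, eye_pitch))
        t += rng.expovariate(1.0 / mean_fixation_s)   # next saccade after a random dwell
    return saccades

if __name__ == "__main__":
    for s in generate_saccades(head_yaw_deg=20.0, head_pitch_deg=-5.0, duration_s=2.0):
        print(s)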
Effect of Proprioception Training of patient with Hemiplegia by Manipulating Visual Feedback using Virtual Reality: The Preliminary results
Sangwoo Cho, J. Ku, Kiwan Han, Hyeongrae Lee, Jinsick Park, Y. Kang, I. Kim, Sun I. Kim
In this study, we confirmed the effect of proprioception training in patients with hemiplegia by manipulating visual feedback. Six patients with hemiplegia participated in the experiment. The patients trained on a reaching task, both with and without visual feedback, for two weeks. They were evaluated with pre-, mid-, and post-tests on the task with and without visual feedback. In the results, the first-click error distance after training on the reaching task was reduced when patients trained on the task with visual feedback removed. In addition, the velocity profile of the reaching movement formed an inverse-U shape after the training. In conclusion, visual feedback manipulation using virtual reality could provide a tool for training reaching movement by forcing patients to use their proprioception, which enhances reaching skills for patients with hemiplegia.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811056
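The inverse-U (bell-shaped) velocity profile mentioned above can be checked crudely by computing a speed profile from sampled cursor positions and counting its peaks; the sketch below is illustrative only and does not reproduce the study's outcome measures or data.

# Sketch of a speed-profile check for a reaching movement (illustrative only).

def speed_profile(positions, dt):
    """positions: list of (x, y) cursor samples taken every dt seconds."""
    return [((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
            for (x0, y0), (x1, y1) in zip(positions, positions[1:])]

def count_speed_peaks(speeds):
    """A single peak suggests the smooth, bell-shaped (inverse-U) profile of a
    well-coordinated reach; many peaks suggest corrective sub-movements."""
    return sum(1 for i in range(1, len(speeds) - 1)
               if speeds[i - 1] < speeds[i] >= speeds[i + 1])

if __name__ == "__main__":
    samples = [(0.0, 0.0), (0.5, 0.0), (1.5, 0.0), (3.0, 0.0), (4.0, 0.0), (4.5, 0.0)]
    v = speed_profile(samples, dt=0.1)
    print([round(s, 1) for s in v], count_speed_peaks(v))   # one peak -> bell-shaped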
Interactive Odor Playback Based on Fluid Dynamics Simulation
H. Matsukura, Hitoshi Yoshida, H. Ishida, T. Nakamoto
This article describes experiments on an interactive application of an olfactory display system that incorporates computational fluid dynamics (CFD) simulation. In the proposed system, the olfactory display is used to add special effects to movies and virtual reality systems by releasing odors relevant to the scenes shown on the computer screen. To provide high-presence olfactory stimuli to the users, a model of the environment shown in the scene is provided to a CFD solver. The airflow field in the environment and the dispersal of odor molecules from their source are then calculated. An odor blender is used to generate the odor at a concentration determined from the calculated odor distribution. In the experiments, a virtual room was presented on a PC monitor, and panelists were asked to stroll through the room to find an odor source. The results showed the effectiveness of the CFD simulation in reproducing the spatial distribution of the odor in the virtual space.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811042
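Conceptually, the playback side only needs to sample the precomputed concentration field at the user's position and map the value to an odor-blender setting; the sketch below illustrates this with a nearest-cell lookup (the grid layout, mapping, and interface are assumptions, not the authors' implementation).

# Illustrative lookup from a precomputed odor-concentration grid to an odor
# blender setting (grid resolution, mapping, and interface are assumed).

def sample_concentration(grid, cell_size, position):
    """Nearest-cell lookup in a precomputed CFD concentration field.
    grid: nested list grid[ix][iy][iz]; cell_size: metres per cell;
    position: (x, y, z) of the user's avatar in the virtual room."""
    ix, iy, iz = (min(max(int(p / cell_size), 0), len(axis) - 1)
                  for p, axis in zip(position, (grid, grid[0], grid[0][0])))
    return grid[ix][iy][iz]

def blender_level(concentration, full_scale=1.0, steps=255):
    """Map a concentration to a discrete odor-blender output level (0..steps)."""
    return max(0, min(steps, round(concentration / full_scale * steps)))

if __name__ == "__main__":
    # Tiny 2x2x1 field: odor source near cell (0, 0, 0), weaker further away.
    field = [[[0.8], [0.3]], [[0.2], [0.05]]]
    c = sample_concentration(field, cell_size=1.0, position=(0.4, 1.2, 0.0))
    print(c, blender_level(c))    # a moderate concentration maps to a mid-range level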
Virtual Experiences for Social Perspective-Taking
A. Raij, Aaron Kotranza, D. Lind, Benjamin C. Lok
This paper proposes virtual social perspective-taking (VSP). In VSP, users are immersed in an experience of another person to aid in understanding the person's perspective. Users are immersed by 1) providing input to user senses from logs of the target person's senses, 2) instructing users to act and interact like the target, and 3) reminding users that they are playing the role of the target. These guidelines are applied to a scenario where taking the perspective of others is crucial - the medical interview. A pilot study (n = 16) using this scenario indicates VSP elicits reflection on the perspectives of others and changes behavior in future, similar social interactions. By encouraging reflection and change, VSP advances the state-of-the-art in training social interactions with virtual experiences.
2009 IEEE Virtual Reality Conference. doi:10.1109/VR.2009.4811005