Proceedings APGV: Symposium on Applied Perception in Graphics and Visualization (latest publications)
"Does culture affect the perception of emotion in virtual faces?" P. Khooshabeh, J. Gratch, Lixing Huang, J. Tao. doi:10.1145/1836248.1836287, p. 165.

Previous research, which used images of real human faces drawn mostly from the same facial expression database [Matsumoto and Ekman 1988], has shown that individuals perceive emotions universally across cultures. We conducted an experiment to determine whether culture affects the perception of emotions rendered on virtual faces. Specifically, we test the holistic perception hypothesis: that individuals from collectivist cultures, such as East Asians, visually sample information from central regions of the face (near the top of the nose, by the eyes) rather than from specific facial features. If the holistic perception hypothesis is true, then individuals will confuse emotional facial expressions that differ only in the shape of the mouth. Our stimuli were computer generated using a graphical face-rendering tool, which affords a high level of experimental control for perception researchers.
"Egocentric distance judgments in a large screen display immersive virtual environment" I. Alexandrova, Paolina T. Teneva, S. Rosa, Uwe Kloos, H. Bülthoff, B. Mohler. doi:10.1145/1836248.1836258, pp. 57–60.
People underestimate egocentric distances in head-mounted display virtual environments compared to estimates made in the real world. Our work investigates whether distances are still compressed in a large screen display immersive virtual environment, where participants can see their own body surrounded by the virtual environment. We conducted our experiment both in the real world, using a real room, and in the large screen display immersive virtual environment, using a 3D model of the same room. Our results showed a significant underestimation of verbally reported egocentric distances in the large screen display immersive virtual environment, while distance judgments in the real world were closer to veridical. Moreover, we observed a significant effect of distance in both environments. In the real world, closer distances were slightly underestimated, while farther distances were slightly overestimated. In contrast, in the virtual environment participants overestimated closer distances (up to 2.5 m) and underestimated distances beyond 3 m. A possible reason for this effect in the virtual environment is that participants perceived stereo cues differently when the target was projected on the floor versus on the front screen.
{"title":"Egocentric distance judgments in a large screen display immersive virtual environment","authors":"I. Alexandrova, Paolina T. Teneva, S. Rosa, Uwe Kloos, H. Bülthoff, B. Mohler","doi":"10.1145/1836248.1836258","DOIUrl":"https://doi.org/10.1145/1836248.1836258","url":null,"abstract":"People underestimate egocentric distances in head-mounted display virtual environments, as compared to estimates done in the real world. Our work investigates whether distances are still compressed in a large screen display immersive virtual environment, where participants are able to see their own body surrounded by the virtual environment. We conducted our experiment in both the real world using a real room and the large screen display immersive virtual environment using a 3D model of the real room. Our results showed a significant underestimation of verbal reports of egocentric distances in the large screen display immersive virtual environment, while the distance judgments of the real world were closer to veridical. Moreover, we observed a significant effect of distances in both environments. In the real world closer distances were slightly underestimated, while further distances were slightly overestimated. In contrast to the real world in the virtual environment participants overestimated closer distances (up to 2.5m) and underestimated distances that were further than 3m. A possible reason for this effect of distances in the virtual environment may be that participants perceived stereo cues differently when the target was projected on the floor versus on the front of the large screen.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"171 1","pages":"57-60"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73807513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Detection of image stretching" Yuzhen Niu, Feng Liu, Xue-qing Li, Huiyun Bao, Michael Gleicher. doi:10.1145/1836248.1836266, pp. 93–100.
Resizing images for different devices often involves changing the aspect ratio. A wide variety of approaches to resizing exist: sophisticated "content-aware" (or retargeting) approaches are built on the assumption that carefully chosen distortions are preferable to the naïve approach of uniformly stretching the image. However, there is little codified understanding of how distortions of the image, including uniform stretching or the more complex warps introduced by retargeting, are perceived. In this paper, we describe experiments that explore the perception of image stretching, both to establish a baseline for assessing more complex resizing methods and to develop the methodology for such studies. In a series of experiments, we show that the perception of stretching is a complex phenomenon depending on a myriad of factors, including the amount of distortion, the image content, the viewer's cultural background, and the observation time. We provide a methodology for creating images that avoid unfair cues to stretching, and we explore issues in using online worker communities for studies. We show that even small stretches can be detected in some cases. These findings have ramifications for the design and evaluation of image retargeting and suggest that a more thorough study of distortion perception is necessary.
{"title":"Detection of image stretching","authors":"Yuzhen Niu, Feng Liu, Xue-qing Li, Huiyun Bao, Michael Gleicher","doi":"10.1145/1836248.1836266","DOIUrl":"https://doi.org/10.1145/1836248.1836266","url":null,"abstract":"Resizing images for different devices often involves changing the aspect ratio. A wide variety of approaches for resizing exist: sophisticated \"content-aware\" (or retargeting) approaches are built on the assumption that carefully chosen distortions are preferable to the naïve approach of uniformly stretching the image. However, there is little codified understanding of how distortions of the image, including uniform stretching or more complex warps introduced by retargeting, are perceived. In this paper, we describe experiments that explore the perception of image stretching, to establish the baseline for assessing more complex resizing methods, as well as to establish the methodology. In a series of experiments, we show that the perception of stretching is a complex phenomenon depending on a myriad of factors including the amount of distortion, the image content, the viewer's cultural background, and the observation time. We provide a methodology for creating images that avoid unfair cues to stretching and explore issues in using online worker communities for studies. We show that even small stretches can be detected in some cases. These findings have ramifications for the design and evaluation of image retargeting, and suggest that a more thorough study of distortion perception is necessary.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"7 1","pages":"93-100"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84356308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"What your design looks like to peripheral vision" A. Raj, R. Rosenholtz. doi:10.1145/1836248.1836264, pp. 89–92.

At any given instant, much of a display appears in a user's peripheral vision. Based on the information available in a glance, the user moves their eyes, scanning the display for items of interest and piecing together a coherent view of the display. Much of this processing happens unconsciously. Understanding the information available in the periphery can help in designing better information visualizations and user interfaces, by enabling designers to make displays that effectively guide eye movements and make important information available at a glance. However, it is difficult to attend to our own peripheral vision to gain insight into the information available there. In this paper, we discuss a means of visualizing the information available in peripheral vision, given an image of a display and the current fixation. We show results of our model on several information visualizations.
"Effect of measurement setting in judging traveled distance: additional evidence for underestimation of distance in virtual environments" T. Nguyen, Timofey Grechkin, J. Cremer, J. Kearney, J. Plumert. doi:10.1145/1836248.1836281, p. 159.
There has been a substantial amount of research on two different but related problems: estimating the distances of perceived objects ("how far away is that thing?") and estimating traveled distance ("how far did I just walk?"). For instance, Lappe et al. [2007] recently examined a "leaky path integration" model to account for travel distance judgments in a virtual environment.
{"title":"Effect of measurement setting in judging traveled distance: additional evidence for underestimation of distance in virtual environments","authors":"T. Nguyen, Timofey Grechkin, J. Cremer, J. Kearney, J. Plumert","doi":"10.1145/1836248.1836281","DOIUrl":"https://doi.org/10.1145/1836248.1836281","url":null,"abstract":"There has been a substantial amount of research on two different but related problems: estimating distances of perceived objects (\"how far away is that thing?\") and estimating traveled distance (\"how far did I just walk?\"). For instance, Lappe et al [2007] recently examined a \"leaky path integration\" model to account for travel distance judgments in a virtual environment.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"9 1","pages":"159"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87511193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A system for exploring large virtual environments that combines scaled translational gain and interventions" Xianshi Xie, Qiufeng Lin, Haojie Wu, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer. doi:10.1145/1836248.1836260, pp. 65–72.
This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a user's location in physical space while preserving their spatial awareness of the virtual space. The latter technique is called resetting. In two experiments, we evaluate both scaling the translational gain and resetting while a subject locomotes along a path and then turns to face a remembered object. We find that the two techniques can be effectively combined, although there is a cognitive cost to resetting.
{"title":"A system for exploring large virtual environments that combines scaled translational gain and interventions","authors":"Xianshi Xie, Qiufeng Lin, Haojie Wu, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer","doi":"10.1145/1836248.1836260","DOIUrl":"https://doi.org/10.1145/1836248.1836260","url":null,"abstract":"This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a user's locations in physical space while preserving their spatial awareness of the virtual space. This latter technique is called resetting. In two experiments, we evaluate both scaling the translational gain and resetting while a subject locomotes along a path and then turns to face a remembered object. We find that the two techniques can be effectively combined, although there is a cognitive cost to resetting.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"11 1","pages":"65-72"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88308664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The effect of stereo and context on memory and awareness states in immersive virtual environments" Adam Bennett, Matthew Coxon, K. Mania. doi:10.1145/1836248.1836275, pp. 135–140.

Spatial awareness is crucial for efficient human performance of any task that entails perception of space. Memory of spaces is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in such environments. Furthermore, performance on these tasks may also be influenced by the context of the environment. This research investigates the effect of stereo viewing on object recognition after exposure to an immersive VE, in terms of both scene context and associated awareness states. The immersive simulation consisted of a radiosity-rendered room that was populated either by objects consistent with an office setting or by primitive objects located in similar positions. The simulation was displayed on a stereo, head-tracked head-mounted display. Twenty-four participants across two visual conditions of varying depth cues (absence vs. presence of stereo cues) were exposed to the VE and completed an object-based memory recognition task. Participants also reported one of four states of awareness following each recognition response, reflecting whether visual mental imagery was induced during retrieval. Results revealed better memory of objects that were consistent with the environment context and associated with vivid memorial experiences when the space was viewed in stereo.
"Perceptibility of clones in tree rendering" A. Purvis, V. Sundstedt. doi:10.1145/1836248.1836288, p. 166.

Instancing cloned models is a powerful technique for reducing the time and space requirements of storing and visualizing large populations of similar objects [McDonnell et al. 2008]. This poster presents the results of two perceptual experiments on the application of cloning to plant populations.
"Influence of step frequency on visual speed perception during locomotion" Rachael Casey, A. Pelah, J. Cameron, Joan Lasenby. doi:10.1145/1836248.1836282, p. 160.

Thurrell et al. [1998] first observed that the perceived speed of optic flow decreases in linear proportion to the physical speed of locomotion on a treadmill, and proposed this as a mechanism for discounting from the visual signal the retinal motion due to self-motion, described as an arthrovisual effect [Thurrell and Pelah 2005]. Since human locomotion comprises a complex of articulated movements, step parameters, and associated afferent, efferent, and efference copy signals, questions arise as to the relative contributions of these component signals to the reduction in the perception of optic flow speed (POFS). Here we report experiments [Casey 2010] on the role of step frequency (SF), previously proposed as a reliable estimate of the speed of self-motion [Durgin et al. 2007; Dong et al. 2008].
"A framework for enhancing depth perception in computer graphics" Zeynep Cipiloglu, A. Bulbul, T. Çapin. doi:10.1145/1836248.1836276, pp. 141–148.

This paper introduces a solution for enhancing depth perception in a given 3D computer-generated scene. For this purpose, we propose a framework that decides on suitable depth cues for a given scene and the rendering methods that provide these cues. First, the system calculates the importance of each depth cue using a fuzzy-logic-based algorithm that considers the target tasks in the application and the spatial layout of the scene. Then, a knapsack model is constructed to balance the rendering costs of the graphical methods that provide these cues against their contribution to depth perception. This cost-profit analysis step selects the proper rendering methods. We also present several objective and subjective experiments showing that our automated depth enhancement system is statistically (p < 0.05) better than the other method-selection techniques tested.