Proceedings APGV: Symposium on Applied Perception in Graphics and Visualization: Latest Publications
Perceptual issues in optical-see-through displays
A. Huckauf, Mario H. Urbina, Jens Grubert, I. Böckelmann, Fabian Doil, L. Schega, Johannes Tümler, R. Mecke
Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load in such systems. We investigated visual performance when users performed visual search tasks or dual tasks only on the optical see-through device, only on a computer screen, or switching between both. Even though basic differences between the two devices were controlled for, switching between the presentation devices produced costs in visual performance. The assumption that these decreases in performance are partly due to differences in localizing the presented objects was confirmed by convergence data.
{"title":"Perceptual issues in optical-see-through displays","authors":"A. Huckauf, Mario H. Urbina, Jens Grubert, I. Böckelmann, Fabian Doil, L. Schega, Johannes Tümler, R. Mecke","doi":"10.1145/1836248.1836255","DOIUrl":"https://doi.org/10.1145/1836248.1836255","url":null,"abstract":"Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increasing visual load in respective systems. We investigated visual performance when users performed visual search tasks or dual tasks only on the optical see-through device, only on a computer screen, or switching between both. In spite of having controlled for basic differences between both devices, switching between the presentation devices produced costs in visual performance. The assumption that these decreases in performance are partly due to differences localizing the presented objects was confirmed by convergence data.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"8 1","pages":"41-48"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81176218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saliency for animated meshes with material properties
A. Bulbul, Çetin Koca, T. Çapin, U. Güdükbay
We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes, including their geometry, material, and motion. Each feature contributes to the final saliency map, which is view independent and can therefore be used in both view-dependent and view-independent applications. To verify our saliency calculations, we performed an experiment in which we used an eye tracker to compare the saliency of the regions that viewers look at with that of the other regions of the models. The results confirm that our saliency computation gives promising results. We also present several applications in which the saliency information is used.
{"title":"Saliency for animated meshes with material properties","authors":"A. Bulbul, Çetin Koca, T. Çapin, U. Güdükbay","doi":"10.1145/1836248.1836263","DOIUrl":"https://doi.org/10.1145/1836248.1836263","url":null,"abstract":"We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes including their geometry, material and motion. Each feature contributes to the final saliency map which is view independent; and therefore, can be used for view dependent and view independent applications. To verify our saliency calculations, we performed an experiment in which we use an eye tracker to compare the saliencies of the regions that the viewers look with the other regions of the models. The results confirm that our saliency computation gives promising results. We also present several applications in which the saliency information is used.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"1 1","pages":"81-88"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83578807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D visualization of archaeological uncertainty
Maria Sifniotis, Ben J. C. Jackson, K. Mania, N. Vlassis, P. Watten, M. White
We define uncertainty as an archaeological expert's level of confidence in an interpretation derived from gathered evidence. Archaeologists and computer scientists have urged caution in the use of 3D for archaeological reconstructions because the existence of other possible hypotheses is not always acknowledged. This poster presents a 3D visualization system for archaeological uncertainty.
{"title":"3D visualization of archaeological uncertainty","authors":"Maria Sifniotis, Ben J. C. Jackson, K. Mania, N. Vlassis, P. Watten, M. White","doi":"10.1145/1836248.1836284","DOIUrl":"https://doi.org/10.1145/1836248.1836284","url":null,"abstract":"By uncertainty, we define an archaeological expert's level of confidence in an interpretation deriving from gathered evidence. Archaeologists and computer scientists have urged caution in the use of 3D for archaeological reconstructions because the availability of other possible hypotheses is not always being acknowledged. This poster presents a 3D visualization system of archaeological uncertainty.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"28 1","pages":"162"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85746202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How does a virtual peer influence children's distance from the roadway when initiating crossing?
Timofey Grechkin, Sabarish V. Babu, Christine J. Ziemer, Benjamin Chihak, J. Cremer, J. Kearney, J. Plumert
A bike rider's distance from the roadway is one of the factors that determine the safety of a crossing. First, it dictates the vantage point from which the rider sees the oncoming traffic. Second, it governs the distance that must be covered to clear the path of oncoming traffic. This study investigated how the behavior of a virtual peer in an immersive bicycling simulator influences how far away from the roadway children are when they initiate crossing.
{"title":"How does a virtual peer influence children's distance from the roadway when initiating crossing?","authors":"Timofey Grechkin, Sabarish V. Babu, Christine J. Ziemer, Benjamin Chihak, J. Cremer, J. Kearney, J. Plumert","doi":"10.1145/1620993.1621023","DOIUrl":"https://doi.org/10.1145/1620993.1621023","url":null,"abstract":"A bike rider's distance from the roadway is one of the factors that determine the safety of the crossing. First, it dictates the vantage point from which the rider sees the oncoming traffic. Second, it governs the distance that must be crossed to clear the beam of oncoming traffic. This study investigated how the behavior of a virtual peer in an immersive bicycling simulator influences how far away from the roadway children are when they initiate crossing.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"31 1","pages":"129"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78076404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human perception of quadruped motion
Ljiljana Skrba, C. O'Sullivan
In our research we are interested in human sensitivity to differences in animal gaits. We use point-light walkers as stimuli and follow up with a study using a realistic 3D model. It has previously been shown that humans can recognise human motion, gender, and the identity of an actor from a set of moving points [1973; 1977]. McDonnell et al. [2008] show that both shape and motion influence sex perception of virtual human characters. Mather and West [1993] have shown that people can recognise animals from point-light displays. In order to find out whether we can tell the difference between animals using motion cues, we captured the motion of farm animals.
{"title":"Human perception of quadruped motion","authors":"Ljiljana Skrba, C. O'Sullivan","doi":"10.1145/1620993.1621024","DOIUrl":"https://doi.org/10.1145/1620993.1621024","url":null,"abstract":"In our research we are interested in human sensitivity to differences in animal gaits. We use point light walkers as stimuli, and follow up with a study using a realistic 3D model. Previously it has been shown that humans can regonise human motion, gender and the identity of an actor from a set of moving points [1973; 1977]. McDonnell et al. [2008] show that both shape and motion influence sex perception of virtual human characters. Mather and West [1993] have shown that people can recognise animals from pointlight displays. In order to find out whether we can tell the difference between animals using motion cues, we captured the motion of farm animals.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"34 1","pages":"130"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87497138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Display considerations for night and low-illumination viewing
Rafał K. Mantiuk, Allan G. Rempel, W. Heidrich
An inadequately designed display viewed in the dark can easily cause dazzling glare and affect our night vision. In this paper we test a display design in which the spectral light emission is selected to reduce the impact of the display on night vision performance while at the same time ensuring good display legibility. We use long-wavelength (red) light that is easily visible to daylight vision photoreceptors (cones) but almost invisible to night vision photoreceptors (rods). We verify rod-cone separation in a psychophysical experiment in which we measure contrast detection in the presence of a colored source of glare. In a separate user study we measure the range of display brightness settings that provide good legibility and are not distracting under low ambient lighting. Our results can serve as guidelines for designing displays that change their color scheme at low ambient light levels.
{"title":"Display considerations for night and low-illumination viewing","authors":"Rafał K. Mantiuk, Allan G. Rempel, W. Heidrich","doi":"10.1145/1620993.1621005","DOIUrl":"https://doi.org/10.1145/1620993.1621005","url":null,"abstract":"An inadequately designed display viewed in the dark can easily cause dazzling glare and affect our night vision. In this paper we test a display design in which the spectral light emission is selected to reduce the impact of the display on night vision performance while at the same time ensuring good display legibility. We use long-wavelength light (red) that is easily visible to daylight vision photoreceptors (cones) but almost invisible to night vision photoreceptors (rods). We verify rod-cone separation in a psychophysical experiment, in which we measure contrast detection in the presence of a colored source of glare. In a separate user study we measure the range of display brightness settings that provide good legibility and are not distracting under low ambient lighting. Our results can serve as a guidelines for designing the displays that change their color scheme at low ambient light levels.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"20 1","pages":"53-58"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80588883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of animation, user-controlled interactions, and multiple static views in understanding 3D structures
Taylor Sando, Melanie Tory, Pourang Irani
Visualizations of 3D spatial structures use various techniques, such as user-controlled interactions or 2D projection views, to convey the structure to users. Researchers have shown that motion cues can help assimilate the structure of 3D spatial data, particularly for discerning occluded parts of objects. However, motion cues or smooth animations also have a cost: they increase the viewing time. What remains unclear is whether any one particular viewing modality allows users to understand and operate on a 3D structure as effectively as a combination of 2D and 3D static views. To assess the effectiveness of understanding 3D structures, we carried out three experiments, in each of which we evaluated how well 3D structures are perceived with self-controlled interactions, animated transitions, or 2D+3D static views. In the first experiment, subjects were given a task to estimate the relative distances of objects in a 3D scene. In the second experiment, subjects made judgments to discern and identify differences between 3D objects. In the third experiment, participants were required to reconstruct a 3D spatial structure based on the 3D models presented to them. Results of the three experiments reveal that participants were more accurate and performed the spatial tasks faster with smooth animations and self-controlled interactions than with 2D+3D static views. Overall, our results suggest that the costs involved in interacting with or animating a 3D spatial structure are significantly outweighed by the perceptual benefits derived from viewing and interacting in these modes of presentation.
{"title":"Effects of animation, user-controlled interactions, and multiple static views in understanding 3D structures","authors":"Taylor Sando, Melanie Tory, Pourang Irani","doi":"10.1145/1620993.1621008","DOIUrl":"https://doi.org/10.1145/1620993.1621008","url":null,"abstract":"Visualizations of 3D spatial structures use various techniques such as user controlled interactions or 2D projection views to convey the structure to users. Researchers have shown that motion cues can help assimilate the structure of 3D spatial data, particularly for discerning occluded parts of the objects. However, motion cues or smooth animations also have costs - they increase the viewing time. What remains unclear is whether any one particular viewing time. What remains unclear is whether any one particular viewing modality allows users to understand and operate on the 3D structure as effectively as a combination of 2D and 3D static views. To assess the effectiveness of understanding 3D structures, we carried out three experiments. In all three experiments we evaluated the effectiveness of perceiving 3D structures with either self controlled interactions, animated transitions, and 2D+3D static views. In the first experiment, subjects were given a task to estimate the relative distances of objects in a 3D scene. In the second experiment, subjects made judgements to discern and identify the existence of differences between 3D objects. In the third experiment, participants were required to reconstruct a 3D spatial structure based on the 3D models presented to them. Results of the three experiments reveal that participants were more accurate and performed the spatial tasks faster with smooth animations and self-controlled interactions than with 2D+3D static views. Our results overall suggest that the costs involved in interacting or animating a 3D spatial structure are significantly outweighed by the perceptual benefits derived from viewing and interacting in these modes of presentation.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"44 1","pages":"69-76"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81571955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth judgment measures and occluders in near-field augmented reality
Gurjot Singh, J. Swan, J. A. Jones, Lorraine Lin, S. Ellis
This poster describes a tabletop-based experiment which studied two complementary depth judgment protocols and the effect of an occluding surface on depth judgments in augmented reality (AR). The experimental setup (Figure 1) broadly replicated the setup described by Ellis and Menges [1998], and studied near-field distances between 30 and 60 centimeters. We collected data from six participants; we consider this to be a pilot study.

These distances are important for many AR applications that involve reaching and manipulating; examples include AR-assisted surgery and medical training devices, maintenance tasks, and tabletop meetings where the participants jointly interact with and manipulate shared virtual objects in the middle of the table. Some of these tasks involve "x-ray vision", where AR users perceive objects which are located behind solid, opaque surfaces.

Ellis and Menges [1998] studied tabletop distances using a setup similar to Figure 1. They used a closed-loop perceptual matching task to examine near-field distances of 0.4 to 1.0 meters, and studied the effects of an occluding surface (the x-ray vision condition), convergence, accommodation, observer age, and monocular, biocular, and stereo AR displays. They found that monocular viewing degraded the depth judgment, and that the x-ray vision condition caused a change in vergence angle which resulted in depth judgments being biased towards the observer. They also found that cutting a hole in the occluding surface, which made the depth of the virtual object physically plausible, reduced the depth judgment bias.

The experimental setup (Figure 1) involved a height-adjustable tabletop that allowed observers to easily reach both above and below the table. We used two complementary dependent measures to assess depth judgments. We replicated the closed-loop matching task (Task = closed) of Ellis and Menges [1998], in which observers manipulated a small light to match the depth of the bottom of a slowly rotating, upside-down pyramid (the target object). In addition, we used an open-loop blind reaching task (Task = open) in order to compare the closed-loop task to a more perceptually motivated depth judgment. Our occluding surface was composed of circular foam-core covered with a highly salient checkerboard pattern; when observers saw the occluder (Occluder = present, otherwise Occluder = absent) it was presented 10 cm in front of the target. We used a factorial, within-subjects experimental design; observers made binocular stereo depth judgments.

Figure 2 shows the results by task, occluder, and distance; the results are grouped by task for clarity, and should be judged relative to the 45° veridical lines. Figure 3 shows the results by task and occluder, expressed as normalized error = judged distance / veridical distance. All conditions underestimated the veridical distance of 100% to some degree. The closed-loop task replicated the finding of Ellis and Menges [1998]: the presence of the occluder biased depth judgments towards the observer.
{"title":"Depth judgment measures and occluders in near-field augmented reality","authors":"Gurjot Singh, J. Swan, J. A. Jones, Lorraine Lin, S. Ellis","doi":"10.1145/1620993.1621021","DOIUrl":"https://doi.org/10.1145/1620993.1621021","url":null,"abstract":"This poster describes a tabletop-based experiment which studied two complimentary depth judgment protocols and the effect of an occluding surface on depth judgments in augmented reality (AR). The experimental setup (Figure 1) broadly replicated the setup described by Ellis and Menges [1998], and studied near-field distances between 30 and 60 centimeters. We collected data from six participants; we consider this to be a pilot study.\u0000 These distances are important for many AR applications that involve reaching and manipulating; examples include AR-assisted surgery and medical training devices, maintenance tasks, and table-top meetings where the participants are jointly interacting and manipulating shared virtual objects in the middle of the table. Some of these tasks involve \"x-ray vision\", where AR users perceive objects which are located behind solid, opaque surfaces.\u0000 Ellis and Menges [1998] studied tabletop distances using a setup similar to Figure 1. They used a closed-loop perceptual matching task to examine near-field distances of 0.4 to 1.0 meters, and studied the effects of an occluding surface (the x-ray vision condition), convergence, accommodation, observer age, and monocular, biocular, and stereo AR displays. They found that monocular viewing degraded the depth judgment, and that the x-ray vision condition caused a change in vergence angle which resulted in depth judgments being biased towards the observer. They also found that cutting a hole in the occluding surface, which made the depth of the virtual object physically plausible, reduced the depth judgment bias.\u0000 The experimental setup (Figure 1) involved a height-adjustable tabletop that allowed observers to easily reach both above and below the table. We used two complimentary dependent measures to assess depth judgments: we replicated the closed-loop matching task (Task = closed) of Ellis and Menges [1998]; observers manipulated a small light to match the depth of the bottom of a slowly rotating, upside-down pyramid (the target object). In addition, we used an open-loop blind reaching task (Task = open), in order to compare the closed-loop task to a more perceptually-motivated depth judgment. Our occluding surface was composed of circular foam-core covered with a highly-salient checkerboard pattern; when observers saw the occluder (Occluder = present, otherwise Occluder = absent) it was presented 10 cm in front of the target. We used a factorial, within-subjects experimental design; observers made binocular stereo depth judgments.\u0000 Figure 2 shows the results by task, occluder, and distance; the results are grouped by task for clarity, and should be judged relative to the 45° veridical lines. Figure 3 shows the results by task and occluder, expressed as normalized error = judged distance / veridical distance. All conditions underestimated the veridical distance of 100% to some degree. The closed-loop task replicated the finding of Ellis and Menges [1998]: the presence of the occlud","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. 
Symposium on Applied Perception in Graphics and Visualization","volume":"6 1","pages":"127"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84868713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
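The poster reports its results as a normalized error, defined in the text as judged distance divided by veridical distance. A minimal sketch of that computation follows; the trial values in the example are hypothetical.

    def normalized_error(judged_cm, veridical_cm):
        """Normalized depth-judgment error: judged distance / veridical distance.

        A value of 1.0 (100%) is a perfect judgment; values below 1.0 indicate
        underestimation, i.e. judgments biased towards the observer.
        """
        return judged_cm / veridical_cm

    # Hypothetical trial: a target at 45 cm judged to be at 41 cm.
    print(f"{normalized_error(41.0, 45.0):.1%}")  # about 91%, an underestimate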
Saliency maps of high dynamic range images
J. Petit, R. Brémond, Jean-Philippe Tarel
A number of computational models of visual attention have been proposed based on the concept of a saliency map, most of them validated using oculometric data. They are widely used in computer graphics applications with low dynamic range images, mainly for image rendering, in order to avoid spending too much computing time on non-salient areas. However, these algorithms have so far not been used with high dynamic range (HDR) inputs. In this paper, we show that in the case of HDR images, the predictions of algorithms based on [Itti and Koch 2000] are less accurate than with 8-bit images. To improve the saliency computation for HDR inputs, we propose a new algorithm derived from [Itti and Koch 2000]. With an eye tracking experiment on an HDR scene, we show that this algorithm gives good results for the saliency map computation, with a better fit between the saliency map and the ocular fixation map than Itti's algorithm.
{"title":"Saliency maps of high dynamic range images","authors":"J. Petit, R. Brémond, Jean-Philippe Tarel","doi":"10.1145/1620993.1621028","DOIUrl":"https://doi.org/10.1145/1620993.1621028","url":null,"abstract":"A number of computational models of visual attention have been proposed based on the concept of saliency map, most of them validated using oculometric data. They are widely used for Computer Graphics applications with Low Dynamic Range images, mainly for image rendering, in order to avoid spending too much computing time on non salient areas. However, these algorithms were not used so far with High Dynamic Range (HDR) inputs. In this paper, we show that in the case of HDR images, the predictions using algorithms based on [Itti and Koch 2000] are less accurate than with 8-bit images. To improve the saliency computation for HDR inputs, we propose a new algorithm derived from [Itti and Koch 2000]. From an eye tracking experiment with a HDR scene, we show that this algorithm leads to good results for the saliency map computation, with a better fit between the saliency map and the ocular fixation map than Itti's algorithm.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"18 1","pages":"118-130"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90633161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The interaction between motion and form in expression recognition
D. Cunningham, C. Wallraven
Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point-light faces. Since they show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated; in the second, the size of the images was varied. Overall, dynamic expressions performed better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence showing the critical importance of dynamic information for the processing of facial expressions: as long as motion information is present, very little spatial information is required.
{"title":"The interaction between motion and form in expression recognition","authors":"D. Cunningham, C. Wallraven","doi":"10.1145/1620993.1621002","DOIUrl":"https://doi.org/10.1145/1620993.1621002","url":null,"abstract":"Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point light faces. Since they show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated. In the second, the size of the images was varied. Overall, dynamic expressions performed better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence that shows the critical importance of dynamic information for processing of facial expressions: As long as motion information is present, very little spatial information is required.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"10 1","pages":"41-44"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75788207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}