Proceedings APGV: ... Symposium on Applied Perception in Graphics and Visualization (latest publications)
Interaction between real and virtual humans during walking: perceptual evaluation of a simple device
A. Olivier, Jan Ondřej, J. Pettré, R. Kulpa, A. Crétual
Before realistic interactions between real and virtual humans can take place during navigation tasks in virtual reality, it must first be validated that a real user can correctly perceive the motion of a virtual human. In this paper we focus on collision avoidance tasks. Previous work has shown that real humans accurately estimate others' motion and avoid collisions with anticipation. Our main contribution is a perceptual evaluation of a simple virtual reality system, with the goal of assessing whether real humans can also accurately estimate a virtual human's motion before avoiding a collision. Results show that, even through a simple system, users correctly evaluate the interaction from a qualitative point of view: in particular, as in real interactions, they accurately decide whether or not to give way to the virtual human. From a quantitative point of view, however, users find it difficult to determine whether they will collide with the virtual human. Deciding whether to give way is a two-choice problem, whereas detecting a future collision requires determining whether certain visual variables fall within a given interval. We discuss this problem in terms of the bearing angle.
DOI: 10.1145/1836248.1836271. Pages 117-124. Published 2010-07-23.
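The bearing-angle cue mentioned above can be made concrete: for two walkers on straight, constant-speed trajectories, a bearing angle that stays constant over time signals a future collision, while one that drifts signals that the walkers will pass clear. A minimal numerical sketch, with all positions and speeds invented for illustration:

```python
import math

def bearing_angle(obs_pos, obs_vel, tgt_pos):
    """Angle (radians) between the observer's heading and the line of sight to the target."""
    los_x, los_y = tgt_pos[0] - obs_pos[0], tgt_pos[1] - obs_pos[1]
    return math.atan2(los_y, los_x) - math.atan2(obs_vel[1], obs_vel[0])

def bearing_over_time(tgt_start, steps=5, dt=1.0):
    """Sample the bearing angle while both walkers move at 1.4 m/s (a typical walking speed)."""
    obs_start, obs_vel = (-10.0, 0.0), (1.4, 0.0)  # observer walks along +x
    tgt_vel = (0.0, 1.4)                           # virtual human walks along +y
    angles = []
    for i in range(steps):
        t = i * dt
        obs = (obs_start[0] + obs_vel[0] * t, obs_start[1] + obs_vel[1] * t)
        tgt = (tgt_start[0] + tgt_vel[0] * t, tgt_start[1] + tgt_vel[1] * t)
        angles.append(bearing_angle(obs, obs_vel, tgt))
    return angles

collision = bearing_over_time((0.0, -10.0))  # both walkers reach the origin at the same time
miss = bearing_over_time((0.0, -6.0))        # the target crosses the origin first
# constant bearing -> collision course; drifting bearing -> the walkers pass clear
```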
Exploring peripheral LOD change detections during interactive gaming tasks
Francisco López Luro, Ramón Mollá Vayá, V. Sundstedt
Computer games require players to interact with scenes while performing various tasks. In this paper an experimental game framework was developed to measure players' perception of level-of-detail (LOD) changes in 3D models (for example, a bunny), as shown in Figure 1. These models were unrelated to the task assigned to the player and were located away from the area in which the task was being accomplished. An interactive task, such as a point-and-shoot game, triggers a top-down vision process. Performing a specific task can result in inattentional blindness (IB) for the player: the phenomenon of failing to perceive things that are in plain sight. IB can allow substantial simplification of scene objects unrelated to the task at hand. Five experiments were conducted exploring peripheral LOD change detection during an interactive gaming task. In three of the five experiments, different levels of awareness of the same task were tested, and it was found that only participants fully aware of the 3D LOD changes were able to detect them, and then only about 15% of them, during the game. In the other two experiments, with players fully aware of the LOD changes, the distance at which each change of resolution could be detected was measured, with a different number of LOD levels used in each experiment.
DOI: 10.1145/1836248.1836262. Pages 73-80. Published 2010-07-23.
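The distance-dependent detection thresholds measured in the last two experiments are exactly what a game engine's LOD selector relies on. A minimal sketch of such a selector, with invented switch distances rather than values from the study:

```python
def select_lod(distance, switch_distances=(5.0, 15.0, 30.0)):
    """Return the LOD index for an object at `distance` metres from the camera.

    Index 0 is the full-resolution model; each subsequent index is a coarser
    version. The switch distances here are placeholders for illustration only.
    """
    for lod, limit in enumerate(switch_distances):
        if distance <= limit:
            return lod
    return len(switch_distances)  # coarsest model beyond the last threshold

# A peripheral object moving away from the player steps through the levels:
levels = [select_lod(d) for d in (2.0, 10.0, 20.0, 50.0)]  # -> [0, 1, 2, 3]
```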
Perceptual principles for scalable sequence alignment visualization
Danielle Albers, Michael Gleicher
Sequence comparison is a fundamental task in the biological sciences. Scientists often need to understand the similarities and differences between genetic sequences to understand evolution, to infer common function, or to identify differences. Because the sequences are too long for manual examination, scientists rely on alignment tools that automatically identify subsequences that match between the sequences being compared. Numerous approaches for displaying and exploring alignments exist and have been incorporated into a wide variety of tools. See [Procter et al. 2010] for a survey of several existing approaches.
DOI: 10.1145/1836248.1836286. Page 164. Published 2010-07-23.
Measuring the perception of light inconsistencies
Jorge López-Moreno, V. Sundstedt, Francisco Sangorrin, D. Gutierrez
In this paper we explore the ability of the human visual system to detect inconsistencies in the illumination of objects in images. We specifically focus on objects lit from different angles than the rest of the image. We present the results of three different tests: two with synthetic objects and a third with digitally manipulated real images. Our results seem to agree with previous publications on the topic, but we extend them by providing quantifiable data which in turn suggest approximate perceptual thresholds. Given that light detection in single images is an ill-posed problem, these thresholds can provide valid error limits for related algorithms in different contexts, such as compositing or augmented reality.
DOI: 10.1145/1836248.1836252. Pages 25-32. Published 2010-07-23.
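Once such a perceptual threshold is known, applying it in a compositing or augmented reality pipeline is mechanical: flag an inserted object whose estimated light direction deviates from the scene's by more than the tolerance. A sketch, where the 30-degree tolerance is an invented placeholder rather than the paper's measured value:

```python
def light_inconsistent(scene_light_deg, object_light_deg, tolerance_deg=30.0):
    """True if the angular difference between two light directions
    (azimuth, in degrees) exceeds the perceptual tolerance."""
    diff = abs(scene_light_deg - object_light_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # wrap to the shorter arc
    return diff > tolerance_deg

light_inconsistent(10.0, 25.0)   # 15 deg off: likely unnoticed
light_inconsistent(10.0, 200.0)  # 170 deg off: flag for correction
light_inconsistent(350.0, 10.0)  # wraps to 20 deg: likely unnoticed
```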
Depth judgment measures and occluding surfaces in near-field augmented reality
Gurjot Singh, J. Edward Swan, J. Adam Jones, Stephen R. Ellis
In this paper we describe an apparatus and experiment that measured depth judgments in augmented reality at near-field distances of 34 to 50 centimeters. The experiment compared perceptual matching, a closed-loop task for measuring depth judgments, with blind reaching, a visually open-loop task. It also studied the effect of a highly salient occluding surface appearing behind, coincident with, and in front of a virtual object. The apparatus and closed-loop matching task were based on previous work by Ellis and Menges. The experiment found maximum average depth judgment errors of 5.5 cm, and found that blind reaching judgments were less accurate than perceptual matching judgments. It also found that the presence of a highly salient occluding surface has a complicated effect on depth judgments, but does not lead to systematically larger or smaller errors.
DOI: 10.1145/1836248.1836277. Pages 149-156. Published 2010-07-23.
Matching actual treadmill walking speed and visually perceived walking speed in a projection virtual environment
Laura Kassler, Jeff Feasel, M. Lewek, F. Brooks, M. Whitton
When developing our Immersive Virtual Environment Rehabilitation Treadmill (IVERT) system (Figure 1), we observed the well-known phenomenon of visuals feeling "too slow" compared to walking speed (Durgin 2005). The work reported here was motivated by the need for a factor, the optical flow multiplier (OFM), by which to multiply the treadmill speed to generate a viewpoint/visual speed that "felt right" to IVERT users. Our most frequent use case will be the therapist setting the treadmill speed and the program then multiplying that speed by the OFM to generate the viewpoint speed.
DOI: 10.1145/1836248.1836283. Page 161. Published 2010-07-23.
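The OFM itself is a single scalar: the visual (viewpoint) speed is the treadmill speed times the multiplier. A sketch of the use case described above, with an invented multiplier value (values above 1.0 would compensate for visuals feeling "too slow"):

```python
def viewpoint_speed(treadmill_speed_mps, ofm=1.0):
    """Viewpoint/visual speed shown to the user, in m/s.

    The therapist sets the treadmill speed; the program scales it by the
    optical flow multiplier (OFM) so the visual flow "feels right".
    """
    return ofm * treadmill_speed_mps

# e.g. treadmill at 0.8 m/s with a hypothetical OFM of 1.25:
viewpoint_speed(0.8, ofm=1.25)  # -> 1.0 m/s of visual flow
```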
Perception of linear and nonlinear motion properties using a FACS validated 3D facial model
D. Cosker, Eva G. Krumhuber, A. Hilton
In this paper we present the first Facial Action Coding System (FACS) validated model to be based on dynamic 3D scans of human faces for use in graphics and psychological research. The model consists of FACS Action Unit (AU) based parameters and has been independently validated by FACS experts. Using this model, we explore the perceptual differences between linear facial motions, represented by a linear blend shape approach, and real facial motions that have been synthesized through the 3D facial model. Through numerical measures and visualizations, we show that the latter type of motion is geometrically nonlinear in terms of its vertices. In experiments, we explore the perceptual benefits of nonlinear motion for different AUs. Our results are insightful for designers of animation systems in both the entertainment industry and scientific research. They reveal a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend shape motion. However, our findings suggest that not all motions need to be animated nonlinearly; the advantage may depend on the type of facial action being produced and the phase of the movement.
DOI: 10.1145/1836248.1836268. Pages 101-108. Published 2010-07-23.
Evaluating the effectiveness of visualization techniques for schematic diagrams in maintenance tasks
Sung-ye Kim, I. Woo, Ross Maciejewski, D. Ebert, T. Ropp, K. Thomas
To perform daily maintenance and repair tasks in complex electrical and mechanical systems, technicians commonly consult a large number of diagrams and documents detailing system properties, in both electronic and print formats. In electronic document views, users are typically provided with only traditional pan and zoom features; however, recent advances in information visualization and illustrative rendering styles should allow users to analyze documents more quickly and accurately. In this paper, we evaluate the effectiveness of rendering techniques for document/diagram highlighting, distortion, and navigation while preserving contextual information between related diagrams. We use our previously developed interactive visualization system for technical diagrams (SDViz) in a series of quantitative studies and an in-field evaluation of its usability and usefulness. In the quantitative studies, subjects perform small tasks, similar to actual maintenance work, using tools provided by our system. First, the effects of highlighting within a diagram and between multiple diagrams are evaluated. Second, we analyze the value of preserving highlighting and spatial information when switching between related diagrams, and then present the effectiveness of distortion within a diagram. Finally, we discuss a field study of the system and report our findings.
DOI: 10.1145/1836248.1836254. Pages 33-40. Published 2010-07-23.
The perception of finger motions
S. Jörg, J. Hodgins, C. O'Sullivan
In this paper, we explore the perception of the finger motions of virtual characters. In three experiments designed to investigate finger animations, we asked the following questions: When are errors in finger motion noticeable? What are the consequences of these errors? What animation method should we recommend? We found that synchronization errors as small as 0.1 s can be detected, but that the perceptibility of errors is highly dependent on the type of motion. Errors in finger animations can change the interpretation of a scene even without altering its perceived quality. Finally, of the four conditions tested (original motion capture, no motions, keyframed animation, and randomly selected motions), the original motion-captured movements were rated as having the highest quality.
DOI: 10.1145/1836248.1836273. Pages 129-133. Published 2010-07-23.
Movements and voices affect perceived sex of virtual conversers
R. Mcdonnell, C. O'Sullivan
In this paper, we investigate the ability of humans to determine the sex of conversing characters based on audio and visual cues. We used a corpus of motions and sounds captured from three male and three female actors conversing about a range of topics. In our Unisensory Experiments, visual and auditory stimuli were presented separately to participants, who rated how male or female they found them to be. In our Multisensory Experiments, audio and visual information were integrated to determine how they interacted. We found that audio was much easier to classify than motion, and that audio affected but did not saturate ratings when motion and audio were integrated. Finally, even when informative appearance cues were present, they did not help to disambiguate incongruent motion and audio.
DOI: 10.1145/1836248.1836272. Pages 125-128. Published 2010-07-23.