A. Fukumoto, K. Tsukada, J. Kurumizawa, DongSheng Cai
Many scientists and artists have explored the association between color and music, applying various theories (e.g., perceptual experiments and tone-color correspondence schemes). For example, Scriabin and Kandinsky explored the artistic possibilities of playing colors and music simultaneously on a color organ, probing the emotional and perceptual dynamics of simultaneous presentations of color and sound.
{"title":"Exploring artistic possibilities of sensory fusion of color and music","authors":"A. Fukumoto, K. Tsukada, J. Kurumizawa, DongSheng Cai","doi":"10.1145/1272582.1272620","DOIUrl":"https://doi.org/10.1145/1272582.1272620","url":null,"abstract":"Many scientists and artists explored association between color and music applying various theories (e.g., perceptional experiments and tone-color correspondence schemes). For example, Scriabin and Kandinsky explored the artistic possibilities of the simultaneous playing of colors and music by playing color organ. They explored the emotional and perceptual dynamics of simultaneous presentations of color and sound.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128568324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Mcdonnell, S. Jörg, J. Hodgins, F. Newell, C. O'Sullivan
An experiment was conducted to determine the factors that influence the perceived sex of virtual characters. Four different model types were used: highly realistic male and female models, an androgynous character, and a point-light walker. Three different types of motion were applied to all models: motion-captured male and female walks, and neutral synthetic walks. We found that both form and motion influence sex perception for these characters: for neutral synthetic motions, form determines perceived sex, whereas natural motion affects the perceived sex of both androgynous and realistic forms. These results have implications for variety and realism when simulating large crowds of virtual characters.
{"title":"Virtual shapers & movers: form and motion affect sex perception","authors":"R. Mcdonnell, S. Jörg, J. Hodgins, F. Newell, C. O'Sullivan","doi":"10.1145/1272582.1272584","DOIUrl":"https://doi.org/10.1145/1272582.1272584","url":null,"abstract":"An experiment to determine factors that influence the perceived sex of virtual characters was conducted. Four different model types were used: highly realistic male and female models, an androgynous character, and a point light walker. Three different types of motion were applied to all models: motion captured male and female walks, and neutral synthetic walks. We found that both form and motion influence sex perception for these characters: for neutral synthetic motions, form determines perceived sex, whereas natural motion affects the perceived sex of both androgynous and realistic forms. These results have implications on variety and realism when simulating large crowds of virtual characters.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121233870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many domains it is very important that observers form an accurate percept of 3-dimensional structure from 2-dimensional images of scenes or objects. This is particularly relevant for designers, who need to make decisions about refining novel objects that have not yet been physically built. This study presents the results of two experiments that tested the effect of lighting direction on the shape perception of smooth surfaces, using shading and lighting techniques common in modeling and design software. The first experiment was a two-alternative forced-choice task that crossed the amount of shape difference between smooth surfaces, each lit by a single point light, with whether the light-source position was the same or different for each surface. Results show that, as the difference between the shapes decreased, participants became increasingly biased towards choosing the match shape lit by the same source as the test shape. In the second experiment, participants reported the orientation at equivalent probe locations on pairs of smooth surfaces presented simultaneously, using gauge figures. The surfaces could be either the same or slightly different, and the light source of each shape could be either at the same relative location or offset by 90° horizontally. Participants reported large differences in surface orientation when the lighting condition differed, even when the shapes were the same, confirming the first results. Our findings show that lighting conditions can have a strong effect on 3-dimensional perception, and suggest that great care should be taken when projection systems are used for 3D visualisation where an accurate representation is required, either by carefully choosing lighting conditions or by using more realistic rendering techniques.
{"title":"Distortion in 3D shape estimation with changes in illumination","authors":"F. Caniard, R. Fleming","doi":"10.1145/1272582.1272602","DOIUrl":"https://doi.org/10.1145/1272582.1272602","url":null,"abstract":"In many domains it is very important that observers form an accurate percept of 3-dimensional structure from 2-dimensional images of scenes or objects. This is particularly relevant for designers who need to make decisions concerning the refinement of novel objects that haven't been physically built yet. This study presents the results of two experiments whose goal was to test the effect of lighting direction on the shape perception of smooth surfaces using shading and lighting techniques commonly used in modeling and design software. The first experiment consisted of a two-alternative forced-choice task which compared the effect of the amount of shape difference between smooth surfaces lit by a single point light with whether the position of the light sources were the same or different for each surface. Results show that, as the difference between the shapes decreased, participants were more and more biased towards choosing the match shape lit by the same source as the test shape. In the second experiment, participants had to report the orientation at equivalent probe locations on pairs of smooth surfaces presented simultaneously, using gauge figures. The surfaces could either be the same or slightly different and the light source of each shape could either be at the same relative location or offset by 90° horizontally. Participants reported large differences in surface orientation when the lighting condition was different, even when the shapes were the same, confirming the first results. Our findings show that lighting conditions can have a strong effect on 3-dimensional perception, and suggest that great care should be taken when projection systems are used for 3D visualisation where an accurate representation is required, either by carefully choosing lighting conditions or by using more realistic rendering techniques.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
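In the second experiment, gauge-figure settings amount to reported surface normals, and the key measurement is the angular difference between the normals reported at matched probe points. A minimal sketch of that comparison in Python (illustrative only, not the authors' analysis code):

```python
import numpy as np

def normal_angle_deg(n1, n2):
    """Angular difference in degrees between two surface normals,
    as one might compare gauge-figure settings at matched probe points."""
    n1 = np.asarray(n1, dtype=float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, dtype=float) / np.linalg.norm(n2)
    cos = np.clip(np.dot(n1, n2), -1.0, 1.0)  # clip guards against rounding
    return np.degrees(np.arccos(cos))

# Identical shapes but lighting offset horizontally: the study reports
# large differences in reported orientation even in this case.
print(normal_angle_deg([0, 0, 1], [0.5, 0, 0.866]))  # ~30 degrees
```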
Ten years ago Greenberg and colleagues presented their framework for realistic image synthesis [Greenberg et al. 1997], aiming "to develop physically based lighting models and perceptually based rendering procedures for computer graphics that will produce synthetic images that are visually and measurably indistinguishable from real-world images", paraphrasing Sutherland's 'ultimate display' [Sutherland 1965]. They specifically encouraged vision researchers to use natural, complex and three-dimensional (3D) visual displays to get a better understanding of human vision and to develop more comprehensive visual models for computer graphics that will improve the efficiency of algorithms. In this paper we follow Greenberg et al.'s directive and analyse colour and luminance gradients in a complex 3D scene. The gradients arise from changes in the light source position and orientation of surfaces. Information in image gradients could apprise the visual system about intrinsic surface reflectance properties or extrinsic illumination phenomena, including shading, shadowing and inter-reflections. Colour gradients induced by inter-reflection may play a similar role to that of luminance gradients in shape-from-shading algorithms; it has been shown that 3D shape perception modulates the influence of inter-reflections on surface colour perception [Bloj et al. 1999]. Here we report a psychophysical study in which we tested whether observers were able to discriminate between gradients due to different light source positions; observers reliably detected a change in the gradient information when the light source position differed by only 4 deg from the reference scene (Experiment 1). This sensitivity was mainly based on the luminance information in the gradient (Experiments 2 and 3).
We conclude that for a realistic impression of a scene a global illumination algorithm should model the luminance component of inter-reflections accurately, whereas it is less critical to accurately represent the spatial variation in chromaticity.
{"title":"On seeing and rendering colour gradients","authors":"A. Ruppertsberg, A. Hurlbert, Marina Bloj","doi":"10.1145/1272582.1272599","DOIUrl":"https://doi.org/10.1145/1272582.1272599","url":null,"abstract":"Ten years ago Greenberg and colleagues presented their framework for realistic image synthesis [Greenberg et al. 1997], aiming \"to develop physically based lighting models and perceptually based rendering procedures for computer graphics that will produce synthetic images that are visually and measurably indistinguishable from real-world images\", paraphrasing Sutherland's 'ultimate display' [Sutherland 1965]. They specifically encouraged vision researchers to use natural, complex and three-dimensional (3D) visual displays to get a better understanding of human vision and to develop more comprehensive visual models for computer graphics that will improve the efficiency of algorithms. In this paper we follow Greenberg et al.'s directive and analyse colour and luminance gradients in a complex 3D scene. The gradients arise from changes in the light source position and orientation of surfaces. Information in image gradients could apprise the visual system about intrinsic surface reflectance properties or extrinsic illumination phenomena, including shading, shadowing and inter-reflections. Colour gradients induced by inter-reflection may play a similar role to that of luminance gradients in shape-from-shading algorithms; it has been shown that 3D shape perception modulates the influence of inter-reflections on surface colour perception [Bloj et al. 1999]. Here we report a psychophysical study in which we tested whether observers were able to discriminate between gradients due to different light source positions and found that observers reliably detected a change in the gradient information when the light source position differed by only 4 deg from the reference scene (Experiment 1). This sensitivity was mainly based on the luminance information in the gradient (Experiments 2 and 3). We conclude that for a realistic impression of a scene a global illumination algorithm should model the luminance component of inter-reflections accurately, whereas it is less critical to accurately represent the spatial variation in chromaticity.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122104495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
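The conclusion rests on separating a colour gradient into a luminance component and a chromaticity component. A minimal sketch of that decomposition, assuming Rec. 709 luma weights and sum-normalised chromaticity (neither choice is prescribed by the paper):

```python
import numpy as np

def luminance(rgb):
    """Luminance of RGB samples using Rec. 709 luma weights (our assumption)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def chromaticity(rgb):
    """Sum-normalised chromaticity: each channel divided by R+G+B."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.where(s == 0, 1, s)

# A toy gradient: a reddish inter-reflection fading across 5 samples.
grad = np.linspace([0.8, 0.3, 0.2], [0.4, 0.3, 0.3], 5)
print(luminance(grad))      # the component the paper says must be modelled accurately
print(chromaticity(grad))   # spatial variation here is less critical
```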
Laura C. Trutoiu, Silvia-Dana Marin, B. Mohler, C. Fennema
Vection is defined as the visually induced illusion of self-motion [Fischer and Kornmüller 1930]. Previous research has suggested that linear vection (the illusion of self-translation) is harder to achieve than circular vection (the illusion of self-rotation) in both laboratory settings (typically using 2D stimuli such as black and white stripes) [Rieser 2006] and virtual environment setups [Schulte-Pelkum 2007; Mohler et al. 2005]. In a real-life situation, when experiencing circular vection, all objects rotate around the observer with the same angular velocity. For linear motion, however, the change in the observer's position changes the observed position of closer objects with respect to farther objects or the background. This phenomenon, motion parallax, provides pictorial depth cues, as closer objects appear to move faster than more distant objects.
{"title":"Orthographic and perspective projection influences linear vection in large screen virtual environments","authors":"Laura C. Trutoiu, Silvia-Dana Marin, B. Mohler, C. Fennema","doi":"10.1145/1272582.1272622","DOIUrl":"https://doi.org/10.1145/1272582.1272622","url":null,"abstract":"Vection is defined as the visually induced illusion of self motion [Fischer and Kornmüller 1930]. Previous research has suggested that linear vection (the illusion of self-translation) is harder to achieve than circular vection (the illusion of self-rotation) in both laboratory settings (typically using 2D stimuli such as black and white stripes) [Rieser 2006] and virtual environment setups [Schulte-Pelkum 2007; Mohler et al. 2005]. In a real-life situation when experiencing circular vection all objects rotate around the observer with the same angular velocity. For linear motion, however, the change in the observer position results in a change in the observed position of closer objects with respect to farther away objects or the background. This phenomenon, motion parallax, provides pictorial depth cues as closer objects appear to be moving faster compared to more distant objects.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126942933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
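The motion-parallax relation in the abstract (closer objects sweep faster across the view) follows from the small-angle approximation ω ≈ v/d for an observer translating laterally at speed v past a point at distance d. A toy illustration with made-up values:

```python
import math

def parallax_deg_per_s(v, d):
    """Apparent angular velocity (deg/s) of a point at distance d (m),
    straight ahead, for an observer translating laterally at v (m/s).
    Small-angle approximation: omega ~= v / d (radians per second)."""
    return math.degrees(v / d)

# Closer objects sweep faster across the view, the depth cue the
# abstract describes (distances in metres, values illustrative).
for d in (1.0, 5.0, 20.0):
    print(d, round(parallax_deg_per_s(1.0, d), 2))
```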
B. Sanders, G. Narasimham, B. Rump, T. McNamara, T. Carr, J. Rieser, Bobby Bodenheimer
Virtual environments presented through head-mounted displays (HMDs) are often explored on foot. Exploration on foot is useful because the afferent and efferent cues of physical locomotion aid spatial awareness. However, the size of the virtual environment that can be explored on foot is limited to the dimensions of the HMD's tracking space unless other strategies are used. This paper presents a system for exploring a large virtual environment on foot when the physical surroundings are small, by leveraging people's natural ability to spatially update. It presents three methods of "resetting" users when they reach the physical limits of the HMD tracking system. Resetting involves manipulating the users' location in physical space to move them out of the path of the physical obstruction while maintaining their spatial awareness of the virtual space.
{"title":"Exploring large virtual environments with an HMD when physical space is limited","authors":"B. Sanders, G. Narasimham, B. Rump, T. McNamara, T. Carr, J. Rieser, Bobby Bodenheimer","doi":"10.1145/1272582.1272590","DOIUrl":"https://doi.org/10.1145/1272582.1272590","url":null,"abstract":"Virtual Environments presented through head-mounted displays (HMDs) are often explored on foot. Exploration on foot is useful since the afferent and efferent cues of physical locomotion aid spatial awareness. However, the size of the virtual environment that can be explored on foot is limited to the dimensions of the tracking space of the HMD unless other strategies are used. This paper presents a system for exploring a large virtual environment on foot when the size of the physical surroundings is small by leveraging people's natural ability to spatially update. This paper presents three methods of \"resetting\" users when they reach the physical limits of the HMD tracking system. Resetting involves manipulating the users' location in physical space to move them out of the path of the physical obstruction while maintaining their spatial awareness of the virtual space.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132315572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
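A minimal sketch of the boundary test that would trigger such a reset, assuming a rectangular tracked space centred at the origin; the function name and safety margin are illustrative, not part of the authors' system:

```python
def needs_reset(pos, half_extent, margin=0.5):
    """True if the tracked user is within `margin` metres of a wall.
    pos: (x, z) position in metres; half_extent: half the tracked-space
    size per axis, with the space centred at the origin."""
    return any(abs(p) > h - margin for p, h in zip(pos, half_extent))

room = (2.0, 2.0)                       # a 4 m x 4 m tracked area
print(needs_reset((0.0, 0.0), room))    # False: centre of the room
print(needs_reset((1.8, 0.0), room))    # True: near a physical wall
```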
Broadly speaking, the human visual system attempts to discount illumination so it can extract surface shapes and reflectance characteristics, independent of one's surroundings. This reflects the purpose of vision: to quickly identify and categorize objects so the higher brain can seek opportunities and avoid obstacles and dangers that may be present. Low-level visual processing lies at the very root of survival, and is the product of hundreds of millions of years of animal evolution. It could be said that vision works to circumvent lighting.
{"title":"Dynamic range and visual perception","authors":"G. Ward","doi":"10.1145/1272582.1272597","DOIUrl":"https://doi.org/10.1145/1272582.1272597","url":null,"abstract":"Broadly speaking, the human visual system attempts to discount illumination so it can extract surface shapes and reflectance characteristics, independent of one's surroundings. This elucidates the purpose of vision, to quickly identify and categorize objects so the higher brain can seek opportunities and avoid obstacles and dangers that may be present. Low-level visual processing lies at the very root of survival, and is the product of hundreds of millions of years of animal evolution. It could be said that vision works to circumvent lighting.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125973327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty constitutes a major obstacle to effective decision making. This work presents perceptual and cognitive principles from Tufte, Chambers and Bertin as well as results from user experiments for the theoretical evaluation of uncertainty visualization techniques that aid decision making. These principles can be used in future theoretical evaluations of existing or newly developed uncertainty visualization methods before usability testing with actual users.
{"title":"Cognitive evaluation of uncertainty visualization methods for decision making","authors":"M. Riveiro","doi":"10.1145/1272582.1272610","DOIUrl":"https://doi.org/10.1145/1272582.1272610","url":null,"abstract":"Uncertainty constitutes a major obstacle to effective decision making. This work presents perceptual and cognitive principles from Tufte, Chambers and Bertin as well as results from user experiments for the theoretical evaluation of uncertainty visualization techniques that aid decision making. These principles can be used in future theoretical evaluations of existing or newly developed uncertainty visualization methods before usability testing with actual users.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132071032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image-based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of removing sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The remaining regions of the input views are repacked into a texture atlas. Our method takes advantage of any available scene geometry information but requires only a very coarse approximation. We perform a user study to validate its behaviour, and investigate the impact of the choice of image fidelity metric; the three metrics considered fall into the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render.
{"title":"A perceptual approach to trimming unstructured lumigraphs","authors":"Y. Morvan, C. O'Sullivan","doi":"10.1145/1272582.1272594","DOIUrl":"https://doi.org/10.1145/1272582.1272594","url":null,"abstract":"We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of the removal of sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that are left are repacked into a texture atlas. Our method takes advantage of any scene geometry information available but only requires a very coarse approximation. We perform a user study to validate its behaviour, as well as investigate the impact of the choice of image fidelity metric. The three metrics considered fall in the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. The figure occupying the ninth page is intended to appear on a color plate.)","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"559 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130934811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
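The greedy pruning order described in this abstract can be sketched as follows; the additive toy renderer and the mean-squared-error metric are stand-ins for illustration (the paper itself compares physical, statistical and perceptual metrics):

```python
# Sketch of greedy sub-view pruning: repeatedly drop the sub-view whose
# removal degrades a reference rendering least under some fidelity metric.
def mse(a, b):
    """Mean squared error between two images (flat lists of pixels)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def greedy_prune_order(subviews, render, reference, metric):
    kept = list(subviews)
    order = []
    while len(kept) > 1:
        # Find the sub-view whose removal hurts fidelity the least.
        cheapest = min(kept, key=lambda s: metric(
            render([v for v in kept if v != s]), reference))
        kept.remove(cheapest)
        order.append(cheapest)
    return order  # sub-views in the order they can be pruned

# Toy stand-in: each "sub-view" contributes additively to a 2-pixel image.
contrib = {"A": [0.5, 0.0], "B": [0.0, 0.5], "C": [0.01, 0.01]}
def render(views):
    return [sum(contrib[v][i] for v in views) for i in range(2)]

reference = render(["A", "B", "C"])
print(greedy_prune_order(["A", "B", "C"], render, reference, mse))
# the nearly redundant sub-view "C" is pruned first
```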
M. Nusseck, J. Lagarde, B. Bardy, R. Fleming, H. Bülthoff
For humans, it is useful to be able to visually detect an object's physical properties. One potentially important source of information is the way the object moves and interacts with other objects in the environment. Here, we use computer simulations of a virtual ball bouncing on a horizontal plane to study the correspondence between our ability to estimate the ball's elasticity and to predict its future path. Three experiments were conducted to address (1) perception of the ball's elasticity, (2) interaction with the ball, and (3) prediction of its trajectory. The results suggest that different strategies and information sources are used for passive perception versus actively predicting future behavior.
{"title":"Perception and prediction of simple object interactions","authors":"M. Nusseck, J. Lagarde, B. Bardy, R. Fleming, H. Bülthoff","doi":"10.1145/1272582.1272587","DOIUrl":"https://doi.org/10.1145/1272582.1272587","url":null,"abstract":"For humans, it is useful to be able to visually detect an object's physical properties. One potentially important source of information is the way the object moves and interacts with other objects in the environment. Here, we use computer simulations of a virtual ball bouncing on a horizontal plane to study the correspondence between our ability to estimate the ball's elasticity and to predict its future path. Three experiments were conducted to address (1) perception of the ball's elasticity, (2) interaction with the ball, and (3) prediction of its trajectory. The results suggest that different strategies and information sources are used for passive perception versus actively predicting future behavior.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124247916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
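The physical property under study, elasticity, is commonly summarised by a coefficient of restitution e (rebound speed over impact speed), which under ideal ballistics makes each bounce reach e² times the previous height. A toy sketch of that relation, not the paper's simulation:

```python
def bounce_heights(h0, e, n_bounces):
    """Successive peak heights of a ball dropped from h0 with
    coefficient of restitution e: h_{n+1} = e**2 * h_n."""
    heights = [h0]
    for _ in range(n_bounces):
        heights.append(heights[-1] * e * e)
    return heights

print(bounce_heights(1.0, 0.7, 3))  # each bounce reaches 49% of the last
```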