Incorporating visual attention into mesh simplification techniques
S. Mata, L. Pastor, José Juan Aliaga, Angel Rodríguez
doi:10.1145/1272582.1272611

The goal of this work is to propose a new automatic technique that uses the information obtained from a visual attention model to guide the extraction of a simplified 3D model.
Proceedings of the 4th symposium on Applied perception in graphics and visualization
C. Wallraven, V. Sundstedt
doi:10.1145/1272582

This book contains the proceedings of the Fourth Symposium on Applied Perception in Graphics and Visualization, which was held in Tübingen, Germany on July 25-27, 2007. APGV is an annual event, sponsored by ACM SIGGRAPH, which brings together researchers from the fields of perception, graphics and visualization. The general goals are to use insights from perception to advance the design of methods for visual, auditory and haptic representation, and to use computer graphics to enable perceptual research that would otherwise not be possible.

We received 39 full paper submissions for this year's APGV. Each submission was reviewed by at least three members of the Program Committee, and we decided to accept 17 of these as full papers, to be presented as Oral presentations at the conference (14 as long papers, and 3 as short papers). The Proceedings also include 15 one-page abstracts describing Poster presentations. The posters include summaries of paper submissions that were not accepted for Oral presentation, as well as separate poster submissions.

The Oral Papers cover a wide range of topics. We have classified the papers into four categories, corresponding to the sessions: Faces and Animation, Virtual Environments and Space Perception, Rendering and Surfaces I and II, and Images and Displays.

For the first time at APGV this year we have a Keynote Speaker, Greg Ward (Dolby Canada), whose talk is entitled "Dynamic Range and Visual Perception". Greg Ward is a pioneer in global illumination and high dynamic range imaging, and his work has drawn heavily from and contributed substantially to research on human vision. We also have several other Invited Speakers: Volker Blanz (University of Siegen), Oliver Bimber (University of Weimar), Philip Dutré (University of Leuven) and Rafal Mantiuk (Max Planck Institute for Computer Science in Saarbrücken).
Redundancy reduction in 3D facial motion capture data for animation
Daniela I. Wellein, Cristóbal Curio, H. Bülthoff
doi:10.1145/1272582.1272613

Research on the perception of dynamic faces often requires real-time animations with low latency. With an adaptation of principal feature analysis [Cohen et al. 2002], we can reduce the number of facial motion capture markers by 50%, while retaining the overall animation quality.
Experimental investigations into the feasibility of using augmented walking to facilitate the intuitive exploration of large scale immersive virtual environments
V. Interrante, Lee Anderson, B. Ries, Eleanor O'Rourke, Leanne Gray
doi:10.1145/1272582.1272621

Through the use of Immersive Virtual Environments (IVE) technology, we seek to enable participants to experience a computer-represented environment in the same way as they would if it were actually real. Although the state of the technology is not sufficient, at present, to mimic the experience of reality with such fidelity as to raise ambiguity in people's minds about whether the environment that they are immersed in is real or virtual, there are many indications that IVEs in their present state can be successfully used as a substitute for real environments for many purposes, including job training, psychotherapy and social psychology. Much recent and historical research in the field of virtual reality (VR) has focused on the question of how much and what type of fidelity (visual, auditory, haptic, proprioceptive, etc.) we need to maintain between a virtual and real experience in order to enable participants to achieve similar results from their VR experience as they would in reality.
Psychophysical investigation of facial expressions using computer animated faces
Rita T. Griesser, D. Cunningham, C. Wallraven, H. Bülthoff
doi:10.1145/1272582.1272585

The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.
2-1/2D texture mapping: real-time perceptual surface roughening
S. Pont, P. Sen, P. Hanrahan
doi:10.1145/1272582.1272595

We applied fundamental perceptual and physico-mathematical studies to a fast method for luminance remapping of 2D texture maps which enhances perceived surface roughness in comparison with conventional 2D texture mapping. The fundamental physical mechanism underlying the method is the fact that, for rough matte surfaces, texture contrast increases as the incident illumination tends towards grazing, actually "exploding" near the shadow edge [Pont and Koenderink 2005]. A psychophysical study by Ho et al. [Ho et al. 2006] confirmed that human observers use texture contrast as a cue for relief-depth or surface roughness. Thus, 2D texture-mapped objects will appear to have a rougher surface if the texture contrast is increased as a function of the local illumination angle. In particular, we increase the bidirectional texture contrast in close accordance with the contrast gradients measured for real objects with rough surfaces. The method presented works well for random textures of locally-matte surfaces if the original texture does not have a contrast that is too high. This modification is in addition to the usual attenuation of the surface irradiance due to the angle of the incident illumination, and the computational costs of the technique are similar to those of conventional diffuse shading. This low cost makes it straightforward to implement the technique with real-time shaders, allowing interactive rendering on modern graphics hardware.
A statistical approach for image difficulty estimation in x-ray screening using image measurements
A. Schwaninger, S. Michel, A. Bolfing
doi:10.1145/1272582.1272606

The relevance of aviation security has increased dramatically at the beginning of this century. One of the most important tasks is the visual inspection of passenger bags using x-ray machines. In this study, we investigated the role of image-based factors in human detection of prohibited items in x-ray images. Schwaninger, Hardmeier, and Hofer (2004, 2005) have identified three image-based factors: View Difficulty, Superposition and Bag Complexity. This article presents four experiments that lead to the development of a statistical model able to predict image difficulty from these image-based factors. Experiment 1 replicates earlier findings confirming the relevance of the image-based factors defined by Schwaninger et al. (2005) for x-ray detection performance. In Experiment 2, we found significant correlations between human ratings of image-based factors and human detection performance. In Experiment 3, we introduced our image measurements and found significant correlations between them and human detection performance. Moreover, significant correlations were found between our image measurements and the corresponding human ratings, indicating high perceptual plausibility. In Experiment 4, multiple linear regression analysis showed that our image measurements can predict human performance as well as human ratings can. Applications of a computational model for threat image projection systems and for adaptive computer-based training are discussed.
Using 3D computer graphics for perception: the role of local and global information in face processing
A. Schwaninger, S. Schuhmacher, H. Bülthoff, C. Wallraven
doi:10.1145/1272582.1272586

Everyday life requires us to recognize faces under transient changes in pose, expression and lighting conditions. Despite this, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information), is also encoded. A rigorous study investigating this question has not previously been possible, as the generation of a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments by Bülthoff & Edelman, 1992. The first experiment served as a baseline for the subsequent two experiments. Ten face stimuli were presented from a frontal view and from a 45° side view. At test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint, in that recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were the same as Experiment 1 except that in the testing phase the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information. Blurring was used to provide stimuli in which local featural information was reduced. The results demonstrated that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.
A roughness measure for 3D mesh visual masking
G. Lavoué
doi:10.1145/1272582.1272593

3D models are subject to a wide variety of processing operations such as compression, simplification or watermarking, which introduce slight geometric modifications of the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual degradations. However, few algorithms exploit the human visual system to hide these degradations, although perceptual attributes could be quite relevant for this task. In particular, the Masking Effect refers to the fact that a signal can be masked by the presence of another signal with similar frequency or orientation. In this context we introduce the notion of roughness for a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region can hide geometric distortions much better than a smooth one. Our measure is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object. An application to Visual Masking is presented and discussed.