
Proceedings of the 4th symposium on Applied perception in graphics and visualization: latest publications

Incorporating visual attention into mesh simplification techniques
S. Mata, L. Pastor, José Juan Aliaga, Angel Rodríguez
The goal of this work is to propose a new automatic technique that makes use of the information obtained by means of a visual attention model for guiding the extraction of a simplified 3D model.
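The abstract gives no implementation details, but the core idea of letting a visual attention model steer simplification can be sketched in a few lines. The Python sketch below weights a crude geometric collapse cost by per-vertex saliency so that salient regions are simplified last; the saliency values, the toy mesh and the weighting parameter `alpha` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def attention_weighted_collapse_order(vertices, edges, saliency, alpha=4.0):
    """Rank candidate edge collapses so that visually salient regions are
    simplified last.

    vertices : (N, 3) vertex positions
    edges    : (M, 2) vertex index pairs
    saliency : (N,)  per-vertex attention values in [0, 1]
    alpha    : how strongly attention protects a region (assumed parameter)

    Squared edge length stands in for a real error metric (e.g. quadrics).
    """
    v0, v1 = vertices[edges[:, 0]], vertices[edges[:, 1]]
    base_cost = np.sum((v0 - v1) ** 2, axis=1)            # crude geometric error
    edge_saliency = 0.5 * (saliency[edges[:, 0]] + saliency[edges[:, 1]])
    weighted = base_cost * (1.0 + alpha * edge_saliency)  # attention raises the cost
    return np.argsort(weighted)                           # collapse cheapest first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    verts = rng.random((10, 3))                           # toy vertex cloud
    edges = np.array([[i, (i + 1) % 10] for i in range(10)])
    sal = rng.random(10)                                  # stand-in attention map
    print(attention_weighted_collapse_order(verts, edges, sal))
```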
{"title":"Incorporating visual attention into mesh simplification techniques","authors":"S. Mata, L. Pastor, José Juan Aliaga, Angel Rodríguez","doi":"10.1145/1272582.1272611","DOIUrl":"https://doi.org/10.1145/1272582.1272611","url":null,"abstract":"The goal of this work is to propose a new automatic technique that makes use of the information obtained by means of a visual attention model for guiding the extraction of a simplified 3D model.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123077855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Proceedings of the 4th symposium on Applied perception in graphics and visualization
C. Wallraven, V. Sundstedt
This book contains the proceedings of the Fourth Symposium on Applied Perception in Graphics and Visualization, which was held in Tubingen, Germany on July 25-27, 2007. APGV is an annual event, sponsored by ACM SIGGRAPH, which brings together researchers from the fields of perception, graphics and visualization. The general goals are to use insights from perception to advance the design of methods for visual, auditory and haptic representation, and to use computer graphics to enable perceptual research that would otherwise not be possible.

We received 39 full paper submissions for this year's APGV. Each submission was reviewed by at least three members of the Program Committee, and we decided to accept 17 of these as full papers, to be presented as Oral presentations at the conference (14 as long papers, and 3 as short papers). The Proceedings also include 15 one-page abstracts describing Poster presentations. The posters include summaries of paper submissions that were not accepted for Oral presentation, as well as separate poster submissions.

The Oral Papers cover a wide range of topics. We have classified the papers into four categories, corresponding to the sessions: Faces and Animation, Virtual Environments and Space Perception, Rendering and Surfaces I and II, and Images and Displays.

For the first time at APGV this year we have a Keynote Speaker, Greg Ward (Dolby Canada), whose talk is entitled "Dynamic Range and Visual Perception". Greg Ward is a pioneer in global illumination and high dynamic range imaging, and his work has drawn heavily from and contributed substantially to research on human vision. We also have several other Invited Speakers: Volker Blanz (University of Siegen), Oliver Bimber (University of Weimar), Philip Dutre (University of Leuven) and Rafal Mantiuk (Max Planck Institute for Computer Science in Saarbrucken).
{"title":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","authors":"C. Wallraven, V. Sundstedt","doi":"10.1145/1272582","DOIUrl":"https://doi.org/10.1145/1272582","url":null,"abstract":"This book contains the proceedings of the Fourth Symposium on Applied Perception in Graphics and Visualization, which was held in Tubingen, Germany on July 25-27, 2007. APGV is an annual event, sponsored by ACM SIGGRAPH, which brings together researchers from the fields of perception, graphics and visualization. The general goals are to use insights from perception to advance the design of methods for visual, auditory and haptic representation, and to use computer graphics to enable perceptual research that would otherwise not be possible. \u0000 \u0000We received 39 full paper submissions for this year's AGPV. Each submission was reviewed by at least three members of the Program Committee, and we decided to accept 17 of these as full papers, to be presented as Oral presentations at the conference (14 as long papers, and 3 as short papers). The Proceedings also include 15 one-page abstracts describing Poster presentations. The posters include summaries of paper submissions that were not accepted for Oral presentation, as well as separate poster submissions. \u0000 \u0000The Oral Papers cover a wide range of topics. We have classified the papers into four categories, corresponding to the sessions: Faces and Animation, Virtual Environments and Space Perception, Rendering and Surfaces I and II, and Images and Displays. \u0000 \u0000For the first time at APGV this year we have a Keynote Speaker, Greg Ward (Dolby Canada), whose talk is entitled \"Dynamic Range and Visual Perception\". Greg Ward is a pioneer in global illumination and high dynamic range imaging, and his work has drawn heavily from and contributed substantially to research on human vision. We also have several other Invited Speakers: Volker Blanz (University of Siegen), Oliver Bimber (University of Weimar), Philip Dutre (University of Leuven) and Rafal Mantiuk (Max Planck Institute for Computer Science in Saarbrucken).","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117066040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Redundancy reduction in 3D facial motion capture data for animation
Daniela I. Wellein, Cristóbal Curio, H. Bülthoff
Research on the perception of dynamic faces often requires real-time animations with low latency. With an adaptation of principal feature analysis [Cohen et al. 2002], we can reduce the number of facial motion capture markers by 50%, while retaining the overall animation quality.
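As a rough illustration of how principal feature analysis can halve the marker set, the sketch below projects the capture data onto its leading principal axes, clusters the per-channel loadings, and keeps one channel per cluster (after Cohen et al. 2002). The synthetic data and the use of scikit-learn's KMeans are assumptions; the paper's exact adaptation is not specified in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_markers_pfa(motion, n_keep):
    """Pick a representative subset of capture channels in the spirit of
    principal feature analysis (Cohen et al. 2002).

    motion : (frames, channels) array, e.g. flattened marker trajectories
    n_keep : number of channels to retain (e.g. half of them)
    """
    X = motion - motion.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal axes
    loadings = Vt[:n_keep].T                           # one row per channel
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(loadings)
    kept = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        kept.append(int(members[np.argmin(d)]))        # channel nearest the centre
    return sorted(kept)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    capture = rng.standard_normal((200, 60))           # 200 frames, 20 markers * xyz
    print(select_markers_pfa(capture, n_keep=30))      # keep ~50% of the channels
```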
{"title":"Redundancy reduction in 3D facial motion capture data for animation","authors":"Daniela I. Wellein, Cristóbal Curio, H. Bülthoff","doi":"10.1145/1272582.1272613","DOIUrl":"https://doi.org/10.1145/1272582.1272613","url":null,"abstract":"Research on the perception of dynamic faces often requires real-time animations with low latency. With an adaptation of principal feature analysis [Cohen et al. 2002], we can reduce the number of facial motion capture markers by 50%, while retaining the overall animation quality.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122002254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Experimental investigations into the feasibility of using augmented walking to facilitate the intuitive exploration of large scale immersive virtual environments
V. Interrante, Lee Anderson, B. Ries, Eleanor O'Rourke, Leanne Gray
Through the use of Immersive Virtual Environments (IVE) technology, we seek to enable participants to experience a computer-represented environment in the same way as they would if it were actually real. Although the state of the technology is not sufficient, at present, to mimic the experience of reality with such fidelity as to raise ambiguity in people's minds about whether the environment that they are immersed in is real or virtual, there are many indications that IVEs in their present state can be successfully used as a substitute for real environments for many purposes, including job training, psychotherapy and social psychology. Much recent and historical research in the field of virtual reality (VR) has focused on the question of how much and what type of fidelity (visual, auditory, haptic, proprioceptive, etc.) we need to maintain between a virtual and real experience in order to enable participants to achieve similar results from their VR experience as they would in reality.
{"title":"Experimental investigations into the feasibility of using augmented walking to facilitate the intuitive exploration of large scale immersive virtual environments","authors":"V. Interrante, Lee Anderson, B. Ries, Eleanor O'Rourke, Leanne Gray","doi":"10.1145/1272582.1272621","DOIUrl":"https://doi.org/10.1145/1272582.1272621","url":null,"abstract":"Through the use of Immersive Virtual Environments (IVE) technology, we seek to enable participants to experience a computer-represented environment in the same way as they would if it were actually real. Although the state of the technology is not sufficient, at present, to mimic the experience of reality with such fidelity as to raise ambiguity in peoples' minds about whether the environment that they are immersed in is real or virtual, there are many indications that IVEs in their present state can be successfully used as a substitute for real environments for many purposes, including job training, psychotherapy and social psychology. Much recent and historical research in the field of virtual reality (VR) has focused on the question of how much and what type of fidelity (visual, auditory, haptic, proprioceptive, etc.) we need to maintain between a virtual and real experience in order to enable participants to achieve similar results from their VR experience as they would in reality.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132790141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Psychophysical investigation of facial expressions using computer animated faces
Rita T. Griesser, D. Cunningham, C. Wallraven, H. Bülthoff
The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.
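The experimental design enumerates single facial regions and their combinations for each expression. A small sketch of such a condition grid is given below; the seven expression labels are assumed for illustration, since the abstract only states that seven basic expressions were used.

```python
from itertools import chain, combinations

REGIONS = ("mouth", "eyes", "eyebrows")            # animated regions in the study
EXPRESSIONS = ("happiness", "sadness", "fear",     # assumed labels; the abstract
               "anger", "disgust", "surprise",     # only says seven basic
               "neutral")                          # expressions were used

def region_conditions(regions=REGIONS):
    """All non-empty combinations of facial regions to animate."""
    return list(chain.from_iterable(combinations(regions, k)
                                    for k in range(1, len(regions) + 1)))

if __name__ == "__main__":
    trials = [(e, c) for e in EXPRESSIONS for c in region_conditions()]
    print(len(trials), "stimulus conditions")      # 7 expressions x 7 region sets = 49
```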
{"title":"Psychophysical investigation of facial expressions using computer animated faces","authors":"Rita T. Griesser, D. Cunningham, C. Wallraven, H. Bülthoff","doi":"10.1145/1272582.1272585","DOIUrl":"https://doi.org/10.1145/1272582.1272585","url":null,"abstract":"The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133050797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
2-1/2D texture mapping: real-time perceptual surface roughening
S. Pont, P. Sen, P. Hanrahan
We applied fundamental perceptual and physico-mathematical studies to a fast method for luminance remapping of 2D texture maps which enhances perceived surface roughness in comparison with conventional 2D texture mapping. The fundamental physical mechanism underlying the method is the fact that texture contrast increases as the incident illumination tends towards grazing for rough matte surfaces, actually "exploding" near the shadow edge [Pont and Koenderink 2005]. A psychophysical study by Ho et al. [Ho et al. 2006] confirmed that human observers use texture contrast as a cue for relief-depth or surface roughness. Thus, 2D texture-mapped objects will appear to have a rougher surface if the texture contrast is increased as a function of the local illumination angle. In particular, we increase the bidirectional texture contrast in close accordance with the contrast gradients measured for real objects with rough surfaces. The method presented works well for random textures of locally-matte surfaces if the original texture does not have a contrast that is too high. This modification is in addition to the usual attenuation of the surface irradiance due to the angle of the incident illumination, and the computational costs of the technique are similar to those of conventional diffuse shading. This low cost makes it straightforward to implement the technique with real-time shaders which allow interactive rendering on modern graphics hardware.
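A minimal sketch of the kind of luminance remapping described here: texture contrast is expanded around its mean as the illumination approaches grazing, and the result is attenuated by the usual diffuse term. The linear gain function and its parameter are assumptions; the paper derives the contrast behaviour from measurements of real rough surfaces.

```python
import numpy as np

def shade_with_roughness_cue(texture, cos_theta, gain=3.0):
    """Expand texture contrast towards grazing illumination, then apply the
    usual Lambertian attenuation.

    texture   : (H, W) luminance texture in [0, 1]
    cos_theta : cosine of the incident angle (1 = frontal, 0 = grazing)
    gain      : assumed parameter controlling how fast contrast grows
    """
    mean = texture.mean()
    contrast_scale = 1.0 + gain * (1.0 - cos_theta)           # larger near grazing
    boosted = mean + contrast_scale * (texture - mean)        # remap around the mean
    return np.clip(boosted, 0.0, 1.0) * max(cos_theta, 0.0)   # diffuse falloff

if __name__ == "__main__":
    tex = np.random.default_rng(2).random((4, 4))
    for c in (1.0, 0.5, 0.1):                                 # frontal -> near grazing
        out = shade_with_roughness_cue(tex, c)
        print(f"cos_theta={c:.1f}  relative contrast={out.std() / out.mean():.3f}")
```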
{"title":"2-1/2D texture mapping: real-time perceptual surface roughening","authors":"S. Pont, P. Sen, P. Hanrahan","doi":"10.1145/1272582.1272595","DOIUrl":"https://doi.org/10.1145/1272582.1272595","url":null,"abstract":"We applied fundamental perceptual and physico-mathematical studies to a fast method for luminance remapping of 2D texture maps which enhances perceived surface roughness in comparison with conventional 2D texture mapping. The fundamental physical mechanism underlying the method is the fact that texture contrast increases as the incident illumination tends towards grazing for rough matte surfaces, actually \"exploding\" near the shadow edge [Pont and Koenderink 2005]. A psychophysical study by Ho et al. [Ho et al. 2006] confirmed that human observers use texture contrast as a cue for relief-depth or surface roughness. Thus, 2D texture-mapped objects will appear to have a rougher surface if the texture contrast is increased as a function of the local illumination angle. In particular, we increase the bidirectional texture contrast in close accordance with the contrast gradients measured for real objects with rough surfaces. The method presented works well for random textures of locally-matte surfaces if the original texture does not have a contrast that is too high. This modification is in addition to the usual attenuation of the surface irradiance due to the angle of the incident illumination and the computational costs of the technique are similar to that of conventional diffuse shading. This low cost makes it straightforward to implement the technique with real-time shaders which allow interactive rendering on modern graphics hardware.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"7 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120969843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A statistical approach for image difficulty estimation in x-ray screening using image measurements
A. Schwaninger, S. Michel, A. Bolfing
The relevance of aviation security has increased dramatically at the beginning of this century. One of the most important tasks is the visual inspection of passenger bags using x-ray machines. In this study, we investigated the role of image based factors on human detection of prohibited items in x-ray images. Schwaninger, Hardmeier, and Hofer (2004, 2005) have identified three image based factors: View Difficulty, Superposition and Bag Complexity. This article consists of 4 experiments which led to the development of a statistical model that is able to predict image difficulty based on these image based factors. Experiment 1 is a replication of earlier findings confirming the relevance of image based factors as defined by Schwaninger et al. (2005) on x-ray detection performance. In Experiment 2, we found significant correlations between human ratings of image based factors and human detection performance. In Experiment 3, we introduced our image measurements and found significant correlations between them and human detection performance. Moreover, significant correlations were found between our image measurements and corresponding human ratings, indicating high perceptual plausibility. In Experiment 4, it was shown using multiple linear regression analysis that our image measurements can predict human performance as well as human ratings can. Applications of a computational model for threat image projection systems and for adaptive computer-based training are discussed.
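The statistical model described here amounts to a multiple linear regression from image measurements to detection performance. The sketch below fits such a model with ordinary least squares on synthetic data; the three feature columns stand in for view difficulty, superposition and bag complexity measurements and are assumptions for illustration.

```python
import numpy as np

def fit_difficulty_model(measurements, detection_scores):
    """Ordinary least-squares fit of detection performance from per-image
    measurements (intercept plus one weight per measurement)."""
    X = np.column_stack([np.ones(len(measurements)), measurements])
    coeffs, *_ = np.linalg.lstsq(X, detection_scores, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    feats = rng.random((100, 3))                   # hypothetical image measurements
    truth = np.array([0.9, -0.30, -0.25, -0.20])   # assumed ground-truth weights
    scores = truth[0] + feats @ truth[1:] + 0.02 * rng.standard_normal(100)
    print(fit_difficulty_model(feats, scores))     # should approximate `truth`
```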
{"title":"A statistical approach for image difficulty estimation in x-ray screening using image measurements","authors":"A. Schwaninger, S. Michel, A. Bolfing","doi":"10.1145/1272582.1272606","DOIUrl":"https://doi.org/10.1145/1272582.1272606","url":null,"abstract":"The relevance of aviation security has increased dramatically at the beginning of this century. One of the most important tasks is the visual inspection of passenger bags using x-ray machines. In this study, we investigated the role of image based factors on human detection of prohibited items in x-ray images. Schwaninger, Hardmeier, and Hofer (2004, 2005) have identified three image based factors: View Difficulty, Superposition and Bag Complexity. This article consists of 4 experiments which lead to the development of a statistical model that is able to predict image difficulty based on these image based factors. Experiment 1 is a replication of earlier findings confirming the relevance of image based factors as defined by Schwaninger et al. (2005) on x-ray detection performance. In Experiment 2, we found significant correlations between human ratings of image based factors and human detection performance. In Experiment 3, we introduced our image measurements and found significant correlations between them and human detection performance. Moreover, significant correlations were found between our image measurements and corresponding human ratings, indicating high perceptual plausibility. In Experiment 4, it was shown using multiple linear regression analysis that our image measurements can predict human performance as well as human ratings can. Applications of a computational model for threat image projection systems and for adaptive computer-based training are discussed.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116070098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
Perceptual uniformity of contrast scaling in complex images
Akiko Yoshida, Grzegorz Krawczyk, K. Myszkowski, H. Seidel
{"title":"Perceptual uniformity of contrast scaling in complex images","authors":"Akiko Yoshida, Grzegorz Krawczyk, K. Myszkowski, H. Seidel","doi":"10.1145/1272582.1272614","DOIUrl":"https://doi.org/10.1145/1272582.1272614","url":null,"abstract":"Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org. APGV 2007, Tübingen, Germany, July 26–27, 2007. © 2007 ACM 978-1-59593-670-7/07/0007 $5.00 Perceptual Uniformity of Contrast Scaling in Complex Images","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116714689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Using 3D computer graphics for perception: the role of local and global information in face processing
A. Schwaninger, S. Schuhmacher, H. Bülthoff, C. Wallraven
Everyday life requires us to recognize faces under transient changes in pose, expression and lighting conditions. Despite this, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information) is also encoded. A rigorous study investigating this question has not previously been possible, as the generation of a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments by Bülthoff & Edelman, 1992. The first experiment served as a baseline for the subsequent two experiments. Ten face-stimuli were presented from a frontal view and from a 45° side view. At test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint, in that the recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were the same as Experiment 1 except for the fact that in the testing phase, the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information. Blurring was used to provide stimuli in which local featural information was reduced. The results demonstrated that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.
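The two stimulus manipulations mentioned here, block scrambling (keeps featural information, destroys the configuration) and blurring (keeps the configuration, reduces featural detail), can be sketched as simple image operations; the block size and blur width below are assumed values, not those used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scramble_blocks(image, block=16, seed=0):
    """Shuffle square blocks of the image: local (featural) information is
    kept, the spatial configuration is destroyed.  Assumes the image size is
    a multiple of `block`."""
    h, w = image.shape
    blocks = [image[r:r + block, c:c + block]
              for r in range(0, h, block) for c in range(0, w, block)]
    order = np.random.default_rng(seed).permutation(len(blocks))
    blocks = [blocks[i] for i in order]
    per_row = w // block
    rows = [np.hstack(blocks[i:i + per_row]) for i in range(0, len(blocks), per_row)]
    return np.vstack(rows)

def blur(image, sigma=4.0):
    """Low-pass the image: featural detail is reduced, the overall
    configuration of the face is preserved."""
    return gaussian_filter(image, sigma=sigma)

if __name__ == "__main__":
    face = np.random.default_rng(4).random((128, 128))    # stand-in face image
    print(scramble_blocks(face).shape, blur(face).shape)  # both (128, 128)
```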
{"title":"Using 3D computer graphics for perception: the role of local and global information in face processing","authors":"A. Schwaninger, S. Schuhmacher, H. Bülthoff, C. Wallraven","doi":"10.1145/1272582.1272586","DOIUrl":"https://doi.org/10.1145/1272582.1272586","url":null,"abstract":"Everyday life requires us to recognize faces under transient changes in pose, expression and lighting conditions. Despite this, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information) is also encoded. A rigorous study investigating this question has not previously been possible, as the generation of a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments by Bülthoff & Edelman, 1992. The first experiment served as a baseline for the subsequent two experiments. Ten face-stimuli were presented from a frontal view and from a 45° side view. At test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint, in that the recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were the same as Experiment 1 expect for the fact that in the testing phase, the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information. Blurring was used to provide stimuli in which local featural information was reduced. The results demonstrated that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122416240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
A roughness measure for 3D mesh visual masking
G. Lavoué
3D models are subject to a wide variety of processing operations such as compression, simplification or watermarking, which introduce slight geometric modifications on the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual degradations. However, few algorithms exploit the human visual system to hide these degradations, while perceptual attributes could be quite relevant for this task. Particularly, the Masking Effect defines the fact that a signal can be masked by the presence of another signal with similar frequency or orientation. In this context we introduce the notion of roughness for a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region is able to hide geometric distortions much better than a smooth one. Our measure is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object. An application to Visual Masking is presented and discussed.
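The abstract describes roughness as curvature variability measured over local windows of the mesh. The sketch below approximates this with the standard deviation of an umbrella-operator curvature proxy over a k-ring neighbourhood of a toy grid mesh; both the curvature proxy and the window size are simplifications of the paper's measure.

```python
import numpy as np

def vertex_roughness(vertices, neighbors, ring=2):
    """Per-vertex roughness as the variability of a discrete curvature proxy
    over a local window of the mesh.

    vertices  : (N, 3) positions
    neighbors : list of index lists, neighbors[i] = 1-ring of vertex i
    ring      : window size in graph hops (assumed parameter)
    """
    n = len(vertices)
    # Umbrella-operator magnitude as a cheap curvature proxy.
    curv = np.array([np.linalg.norm(vertices[i] - vertices[neighbors[i]].mean(axis=0))
                     for i in range(n)])
    rough = np.zeros(n)
    for i in range(n):
        window, frontier = {i}, {i}
        for _ in range(ring):                       # grow a k-ring window
            frontier = {j for f in frontier for j in neighbors[f]} - window
            window |= frontier
        rough[i] = curv[list(window)].std()         # noisy window => rough region
    return rough

if __name__ == "__main__":
    g = 8                                           # toy 8 x 8 grid mesh
    ys, xs = np.divmod(np.arange(g * g), g)
    bumps = 0.3 * np.random.default_rng(5).random(g * g)
    verts = np.column_stack([xs, ys, np.where(ys >= g // 2, bumps, 0.0)])
    nbrs = [[rr * g + cc
             for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
             if 0 <= rr < g and 0 <= cc < g]
            for r, c in zip(ys, xs)]
    rough = vertex_roughness(verts, nbrs)
    print("flat rows :", round(float(rough[: g * g // 2].mean()), 3))
    print("bumpy rows:", round(float(rough[g * g // 2:].mean()), 3))
```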
{"title":"A roughness measure for 3D mesh visual masking","authors":"G. Lavoué","doi":"10.1145/1272582.1272593","DOIUrl":"https://doi.org/10.1145/1272582.1272593","url":null,"abstract":"3D models are subject to a wide variety of processing operations such as compression, simplification or watermarking, which introduce slight geometric modifications on the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual degradations. However few algorithms exploit the human visual system to hide these degradations, while perceptual attributes could be quite relevant for this task. Particularly, the Masking Effect defines the fact that a signal can be masked by the presence of another signal with similar frequency or orientation. In this context we introduce the notion of roughness for a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region is able to hide geometric distortions much better than a smooth one. Our measure is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object. An application to Visual Masking is presented and discussed.","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123526488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31