
Latest publications in Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization

The influence of avatar (self and character) animations on distance estimation, object interaction and locomotion in immersive virtual environments
Erin A. McManus, Bobby Bodenheimer, S. Streuber, S. Rosa, H. Bülthoff, B. Mohler
Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) as compared to the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars/characters. However, virtual characters and avatars are increasingly being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants either saw a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation, an object interaction, and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced performance on the tasks that involved interaction with elements in the environment: the object interaction and stepping stone tasks. Overall, participants performed the tasks faster and more accurately when they either had a self-avatar or saw a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within the VE. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.
{"title":"The influence of avatar (self and character) animations on distance estimation, object interaction and locomotion in immersive virtual environments","authors":"Erin A. McManus, Bobby Bodenheimer, S. Streuber, S. Rosa, H. Bülthoff, B. Mohler","doi":"10.1145/2077451.2077458","DOIUrl":"https://doi.org/10.1145/2077451.2077458","url":null,"abstract":"Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) as compared to the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars/characters. However, virtual characters and avatars are more often being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants either saw a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation, an object interaction and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced task performance which involved interaction with elements in the environment; the object interaction and the stepping stone tasks. Overall the participants performed the tasks faster and more accurately when they either had a self-avatar or saw a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within the VE. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"63 1","pages":"37-44"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91267277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 71
Perceptually-based compensation of light pollution in display systems
J. Baar, Steven Poulakos, Wojciech Jarosz, D. Nowrouzezahrai, Rasmus Tamstorf, M. Gross
This paper addresses the problem of unintended light contributions due to physical properties of display systems. An example of such unintended contribution is crosstalk in stereoscopic 3D display systems, often referred to as ghosting. Ghosting results in a reduction of visual quality and may lead to an uncomfortable viewing experience. The latter is due to conflicting (depth) edge cues, which can hinder the human visual system's (HVS) proper fusion of stereo images (stereopsis). We propose an automatic, perceptually-based computational compensation framework, which formulates pollution elimination as a minimization problem. Our method aims to distribute the error introduced by the pollution in a perceptually optimal manner. As a consequence, ghost edges are smoothed locally, resulting in a more comfortable stereo viewing experience. We show how to make the computation tractable by exploiting the structure of the resulting problem, and also propose a perceptually-based pollution prediction. We show that our general framework is applicable to other light pollution problems, such as descattering.
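The minimization can be made concrete with a rough sketch. Assuming a simple linear crosstalk model with leakage factor α and a per-pixel perceptual weight w (our notation; the paper's actual formulation may differ), compensation amounts to choosing displayable images whose visible result is as close as possible to the intended stereo pair:

```latex
% Linear crosstalk (ghosting) model: each eye receives its own
% displayed image plus a leakage fraction \alpha of the other eye's.
\hat{I}_L = C_L + \alpha C_R, \qquad \hat{I}_R = C_R + \alpha C_L
% Perceptually weighted compensation: choose displayable images
% C_L, C_R so that what is seen approximates the intended pair
% (I_L, I_R), with pixel values confined to the displayable range.
\min_{C_L,\, C_R \in [0,1]}
  \big\| w \odot (C_L + \alpha C_R - I_L) \big\|_2^2
+ \big\| w \odot (C_R + \alpha C_L - I_R) \big\|_2^2
```

The box constraint is what typically prevents exact cancellation (a display cannot emit negative light in dark regions), which is why the residual error must be distributed perceptually rather than eliminated outright.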
{"title":"Perceptually-based compensation of light pollution in display systems","authors":"J. Baar, Steven Poulakos, Wojciech Jarosz, D. Nowrouzezahrai, Rasmus Tamstorf, M. Gross","doi":"10.1145/2077451.2077460","DOIUrl":"https://doi.org/10.1145/2077451.2077460","url":null,"abstract":"This paper addresses the problem of unintended light contributions due to physical properties of display systems. An example of such unintended contribution is crosstalk in stereoscopic 3D display systems, often referred to as ghosting. Ghosting results in a reduction of visual quality, and may lead to an uncomfortable viewing experience. The latter is due to conflicting (depth) edge cues, which can hinder the human visual system (HVS) proper fusion of stereo images (stereopsis). We propose an automatic, perceptually-based computational compensation framework, which formulates pollution elimination as a minimization problem. Our method aims to distribute the error introduced by the pollution in a perceptually optimal manner. As a consequence ghost edges are smoothed locally, resulting in a more comfortable stereo viewing experience. We show how to make the computation tractable by exploiting the structure of the resulting problem, and also propose a perceptually-based pollution prediction. We show that our general framework is applicable to other light pollution problems, such as descattering.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"4 1","pages":"45-52"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80527690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Gaze guidance reduces the number of vehicle-pedestrian collisions in a driving simulator
L. Pomârjanschi, M. Dorr, E. Barth
Driving and visual perception are tightly linked. Every moment, a multitude of visual stimuli compete for the driver's limited attentional resources. Despite modern safety measures, traffic accidents remain a major source of fatalities, and a large part of these casualties occur in accidents for which driver distraction was cited as the main cause [National Highway Traffic Safety Administration September 2010]. We propose to help drivers by building an augmented vision system that can guide eye movements towards regions which may constitute a source of danger. In a first study, we have already shown that largely unobtrusive gaze guidance techniques used in a driving simulator help drivers better distribute their attentional resources and drive more safely [Pomarjanschi et al. 2011]. Current experiments investigate the efficiency of more general cues that only signal the direction in which a critical event might occur. Results of these experiments will be reported at the conference.
{"title":"Gaze guidance reduces the number of vehicle-pedestrian collisions in a driving simulator","authors":"L. Pomârjanschi, M. Dorr, E. Barth","doi":"10.1145/2077451.2077482","DOIUrl":"https://doi.org/10.1145/2077451.2077482","url":null,"abstract":"Driving and visual perception are tightly linked. Every moment, a multitude of visual stimuli compete for the driver's limited attentional resources. Despite modern safety measures, traffic accidents still remain a major source of fatalities. A large part of these casualties occur in accidents for which driver distraction was cited as the main cause [National Highway Traffic Safety Administration September 2010]. We propose to help drivers by building an augmented vision system that can guide eye movements towards regions which may constitute a source of danger. In a first study, we have already shown that largely unobtrusive gaze guidance techniques used in a driving simulator help drivers better distribute their attentional resources and drive more safely [Pomarjanschi et al. 2011]. Current experiments investigate the efficiency of more general cues, that only signal the direction in which a critical event might occur. Results of these experiments will be reported at the conference.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"71 Suppl 1 1","pages":"119"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78419842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gaze-contingent real-time video processing to study natural vision
M. Dorr, P. Bex
Most of our knowledge about visual performance has been obtained with simple, synthetic stimuli, such as narrowband gratings presented on homogeneous backgrounds, and under steady fixation. The visual input we encounter in the real world, however, is fundamentally different and comprises a very broad distribution of spatio-temporal frequencies, orientations, colours, and contrasts, and eye movements induce strong temporal transients on the retina several times per second.
{"title":"Gaze-contingent real-time video processing to study natural vision","authors":"M. Dorr, P. Bex","doi":"10.1145/2077451.2077476","DOIUrl":"https://doi.org/10.1145/2077451.2077476","url":null,"abstract":"Most of our knowledge about visual performance has been obtained with simple, synthetic stimuli, such as narrowband gratings presented on homogeneous backgrounds, and under steady fixation. The visual input we encounter in the real world, however, is fundamentally different and comprises a very broad distribution of spatio-temporal frequencies, orientations, colours, and contrasts, and eye movements induce strong temporal transients on the retina several times per second.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"42 1","pages":"113"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76481723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gaze-contingent enhancements for a visual search and rescue task
James Mardell, M. Witkowski, R. Spence
An important task in many fields is the human visual inspection of an image. Those fields include quality control, medical diagnosis, surveillance and Wilderness Search and Rescue (WiSAR) [Goodrich et al. 2008]. The latter activity, triggered by an individual becoming lost, is the context within which this work proposes and evaluates a new approach to the task of human visual inspection.
{"title":"Gaze-contingent enhancements for a visual search and rescue task","authors":"James Mardell, M. Witkowski, R. Spence","doi":"10.1145/2077451.2077472","DOIUrl":"https://doi.org/10.1145/2077451.2077472","url":null,"abstract":"An important task in many fields is the human visual inspection of an image. Those fields include quality control, medical diagnosis, surveillance and Wilderness Search and Rescue (WiSAR) [Goodrich et al. 2008]. The latter activity, triggered by an individual becoming lost, is the context within which this work proposes and evaluates a new approach to the task of human visual inspection.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"1 1","pages":"109"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80639367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Perceiving human motion variety
M. Prazák, C. O'Sullivan
In order to simulate plausible groups or crowds of virtual characters, it is important to ensure that the individuals in a crowd do not look, move, behave or sound identical to each other. Such obvious 'cloning' can be disconcerting and reduce the engagement of the viewer with an animated movie, virtual environment or game. In this paper, we focus in particular on the problem of motion cloning, i.e., where the motion from one person is used to animate more than one virtual character model. Using our database of motions captured from 83 actors (45M and 38F), we present an experimental framework for evaluating human motion, which allows both the static (e.g., skeletal structure) and dynamic (e.g., walking style) aspects of an animation to be controlled. This framework enables the creation of crowd scenarios using captured human motions, thereby generating simulations similar to those found in commercial games and movies, while allowing full control over the parameters that affect the perceived variety of the individual motions in a crowd. We use the framework to perform an experiment on the perception of characteristic walking motions in a crowd, and conclude that the minimum number of individual motions needed for a crowd to look varied could be as low as three. While the focus of this paper was on the dynamic aspects of animation, our framework is general enough to be used to explore a much wider range of factors that affect the perception of characteristic human motion.
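To make the "as few as three motions" finding concrete, the sketch below is a hypothetical illustration (not the authors' framework): it assigns a small set of captured walk cycles to crowd members so that spatial neighbours do not share a clip.

```python
import random

def assign_motions(positions, num_motions=3, neighbor_radius=2.0):
    """Greedily assign one of `num_motions` walk cycles to each crowd
    member so that members within `neighbor_radius` of one another do
    not share a clip, whenever enough distinct clips remain."""
    assignment = []
    for i, (x, y) in enumerate(positions):
        # Clips already used by nearby, previously assigned members.
        taken = {
            assignment[j]
            for j, (px, py) in enumerate(positions[:i])
            if (px - x) ** 2 + (py - y) ** 2 < neighbor_radius ** 2
        }
        free = [m for m in range(num_motions) if m not in taken]
        assignment.append(random.choice(free or list(range(num_motions))))
    return assignment

# 20 crowd members on a rough grid, animated with only three clips.
crowd = [(i % 5 * 1.5, i // 5 * 1.5) for i in range(20)]
print(assign_motions(crowd))
```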
{"title":"Perceiving human motion variety","authors":"M. Prazák, C. O'Sullivan","doi":"10.1145/2077451.2077468","DOIUrl":"https://doi.org/10.1145/2077451.2077468","url":null,"abstract":"In order to simulate plausible groups or crowds of virtual characters, it is important to ensure that the individuals in a crowd do not look, move, behave or sound identical to each other. Such obvious 'cloning' can be disconcerting and reduce the engagement of the viewer with an animated movie, virtual environment or game. In this paper, we focus in particular on the problem of motion cloning, i. e., where the motion from one person is used to animate more than one virtual character model. Using our database of motions captured from 83 actors (45M and 38F), we present an experimental framework for evaluating human motion, which allows both the static (e.g., skeletal structure) and dynamic aspects (e.g., walking style) of an animation to be controlled. This framework enables the creation of crowd scenarios using captured human motions, thereby generating simulations similar to those found in commercial games and movies, while allowing full control over the parameters that affect the perceived variety of the individual motions in a crowd. We use the framework to perform an experiment on the perception of characteristic walking motions in a crowd, and conclude that the minimum number of individual motions needed for a crowd to look varied could be as low as three. While the focus of this paper was on the dynamic aspects of animation, our framework is general enough to be used to explore a much wider range of factors that affect the perception of characteristic human motion.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"35 1","pages":"87-92"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81002940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Egocentric distance perception in HMD-based virtual environments
Qiufeng Lin, Xianshi Xie, Aysu Erdemir, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer
We conducted a followup experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, in contrast to what has been widely reported elsewhere, and this experiment was motivated by the question of whether stereoscopic effects in head-mounted displays (HMDs) account for that difference.
{"title":"Egocentric distance perception in HMD-based virtual environments","authors":"Qiufeng Lin, Xianshi Xie, Aysu Erdemir, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer","doi":"10.1145/2077451.2077486","DOIUrl":"https://doi.org/10.1145/2077451.2077486","url":null,"abstract":"We conducted a followup experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011] except the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, as has been widely reported elsewhere, and we were motivated in this experiment to see if stereoscopic effects in head-mounted displays (HMDs) accounted for this effect.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"20 1","pages":"123"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87794578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Differentiating aggregate gaze distributions
Thomas Grindinger, A. Duchowski, P. Orero
A machine learning approach that classifies aggregate gaze distributions, recorded by an eye tracker and visualized as heatmaps, is demonstrated to successfully discriminate between free and task-driven exploration of video clips.
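A sketch of what such a pipeline could look like, with synthetic stand-in data (the paper's actual features and classifier are not specified in this abstract):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def gaze_heatmap(fixations, shape=(48, 64), sigma=3.0):
    """Aggregate (x, y) fixations in [0, 1)^2 into a blurred,
    normalized heatmap, flattened to a feature vector."""
    h = np.zeros(shape)
    for x, y in fixations:
        h[int(y * shape[0]), int(x * shape[1])] += 1.0
    h = gaussian_filter(h, sigma)
    return (h / h.sum()).ravel()

# Synthetic stand-in data: free viewing scatters fixations across the
# clip; task-driven viewing clusters them around a region of interest.
rng = np.random.default_rng(0)
free = [rng.random((60, 2)) for _ in range(40)]
task = [np.clip(0.5 + 0.08 * rng.standard_normal((60, 2)), 0, 0.99)
        for _ in range(40)]

X = np.array([gaze_heatmap(f) for f in free + task])
y = np.array([0] * len(free) + [1] * len(task))
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```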
{"title":"Differentiating aggregate gaze distributions","authors":"Thomas Grindinger, A. Duchowski, P. Orero","doi":"10.1145/2077451.2077473","DOIUrl":"https://doi.org/10.1145/2077451.2077473","url":null,"abstract":"A machine learning approach used to classify aggregate gaze distributions recorded by an eye tracker and visualized as heatmaps is demonstrated to successfully discriminate between free and task-driven exploration of video clips.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"47 1","pages":"110"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85893352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of video artifact perception using event-related potentials
Lea Lindemann, S. Wenger, M. Magnor
When new computer graphics algorithms for image and video editing, rendering or compression are developed, the quality of the results has to be evaluated and compared. Since the produced media are usually to be presented to an audience, it is important to predict image and video quality as it would be perceived by a human observer. This can be done by applying some image quality metric or by expensive and time-consuming user studies. Typically, statistical image quality metrics do not correlate with quality perceived by a human observer, and more sophisticated HVS-inspired algorithms often do not generalize to arbitrary images. A drawback of user studies is that perceived image or video quality is filtered by a decision process, which, in turn, may be influenced by the performed task and the chosen quality scale. To get an objective view on (subjectively) perceived image quality, electroencephalography can be used. In this paper we show that artifacts appearing in videos elicit a measurable brain response which can be analyzed using the event-related potentials technique. Since electroencephalography itself requires an elaborate procedure, we aim to find a minimal setup to reduce the time and number of participants needed to conduct a reliable study of image and video quality. As a first step, we demonstrate that the reaction to a video with or without an artifact can be identified by an off-the-shelf support vector machine, trained on a set of previously recorded responses, with a reliability of up to 80% from a single recorded electroencephalogram.
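A toy version of the single-trial classification step described above (hypothetical; the paper's preprocessing and features are not given in the abstract): cut the EEG into epochs time-locked to artifact onset, baseline-correct them, and train a support vector machine to label each trial.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def epoch_features(eeg, onsets, pre=50, post=300):
    """Cut (channels x samples) EEG into trials around event onsets,
    baseline-correct each trial, and flatten to feature vectors."""
    trials = []
    for t in onsets:
        ep = eeg[:, t - pre : t + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline
        trials.append(ep.ravel())
    return np.array(trials)

# Synthetic stand-in: 32-channel EEG; 'artifact' trials carry a small
# evoked deflection on top of noise, 'clean' trials are noise only.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 60000))
onsets = np.arange(500, 59000, 600)
labels = rng.integers(0, 2, len(onsets))
for t, lab in zip(onsets, labels):
    if lab:  # inject an ERP-like bump shortly after onset
        eeg[:, t + 100 : t + 200] += 0.4

X = epoch_features(eeg, onsets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```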
{"title":"Evaluation of video artifact perception using event-related potentials","authors":"Lea Lindemann, S. Wenger, M. Magnor","doi":"10.1145/2077451.2077461","DOIUrl":"https://doi.org/10.1145/2077451.2077461","url":null,"abstract":"When new computer graphics algorithms for image and video editing, rendering or compression are developed, the quality of the results has to be evaluated and compared. Since the produced media are usually to be presented to an audience it is important to predict image and video quality as it would be perceived by a human observer. This can be done by applying some image quality metric or by expensive and time consuming user studies. Typically, statistical image quality metrics do not correlate to quality perceived by a human observer. More sophisticated HVS-inspired algorithms often do not generalize to arbitrary images. A drawback of user studies is that perceived image or video quality is filtered by a decision process, which, in turn, may be influenced by the performed task and chosen quality scale. To get an objective view on (subjectively) perceived image quality, electroencephalography can be used. In this paper we show that artifacts appearing in videos elicit a measurable brain response which can be analyzed using the event-related potentials technique. Since electroencephalography itself requires an elaborate procedure, we aim to find a minimal setup to reduce time and participants needed to conduct a reliable study of image and video quality. As a first step we demonstrate that the reaction to a video with or without an artifact can be identified by an off-the-shelf support vector machine, which is trained on a set of previously recorded responses, with a reliability of up to 80% from a single recorded electroencephalogram.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"31 1","pages":"53-58"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89265840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Using eye-tracking to assess different image retargeting methods
Susana Castillo, Tilke Judd, D. Gutierrez
Assessing media retargeting results is not a trivial issue. When resizing an image to a particular percentage of its original size, some content has to be removed, which may affect the image's original meaning and/or composition. We examine the impact of the retargeting process on human fixations by gathering eye-tracking data for a representative benchmark of retargeted images. We compute the derived saliency maps as input to a set of computational image distance metrics. When analyzing the fixations, we found that even strong artifacts may go unnoticed in areas outside the original regions of interest. We also note that the most important alterations in semantics are due to content removal. Since using an eye tracker is not always a feasible option, we additionally show how an existing model of prediction of human fixations also works sufficiently well in a retargeting context.
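A rough sketch of the comparison step (our construction; the paper's specific distance metrics are not reproduced here): derive a saliency map from fixations by splatting and blurring them, then score a retargeted image's map against the original's with a simple divergence.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency(fixations, shape=(60, 80), sigma=4.0):
    """Fixation-derived saliency: splat normalized (x, y) fixations
    onto a grid and blur, yielding a probability map over locations."""
    s = np.zeros(shape)
    for x, y in fixations:  # coordinates in [0, 1)
        s[int(y * shape[0]), int(x * shape[1])] += 1.0
    s = gaussian_filter(s, sigma) + 1e-9  # avoid log(0) later
    return s / s.sum()

def kl_distance(p, q):
    """Kullback-Leibler divergence D(p || q) between saliency maps."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
original = fixation_saliency(rng.random((80, 2)))
# Fixations on the retargeted image, e.g. shifted by content removal.
retargeted = fixation_saliency(np.clip(rng.random((80, 2)) + 0.1, 0, 0.99))
print("saliency divergence:", kl_distance(original, retargeted))
```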
{"title":"Using eye-tracking to assess different image retargeting methods","authors":"Susana Castillo, Tilke Judd, D. Gutierrez","doi":"10.1145/2077451.2077453","DOIUrl":"https://doi.org/10.1145/2077451.2077453","url":null,"abstract":"Assessing media retargeting results is not a trivial issue. When resizing one image to a particular percentage of its original size, some content has to be removed, which may affect the image's original meaning and/or composition. We examine the impact of the retargeting process on human fixations, by gathering eye-tracking data for a representative benchmark of retargeted images. We compute their derived saliency maps as input to a set of computational image distance metrics. When analyzing the fixations, we found that even strong artifacts may go unnoticed for areas outside the original regions of interest. We also note that the most important alterations in semantics are due to content removal. Since using an eye tracker is not always a feasible option, we additionally show how an existing model of prediction of human fixations also works sufficiently well in a retargeting context.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"8 1","pages":"7-14"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90378814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 41