
Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization: Latest Publications

Perceptual issues in optical-see-through displays
A. Huckauf, Mario H. Urbina, Jens Grubert, I. Böckelmann, Fabian Doil, L. Schega, Johannes Tümler, R. Mecke
Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load in such systems. We investigated visual performance when users performed visual search tasks or dual tasks only on the optical see-through device, only on a computer screen, or switching between both. Despite controlling for basic differences between the two devices, switching between the presentation devices produced costs in visual performance. The assumption that these decreases in performance are partly due to differences in localizing the presented objects was confirmed by convergence data.
{"title":"Perceptual issues in optical-see-through displays","authors":"A. Huckauf, Mario H. Urbina, Jens Grubert, I. Böckelmann, Fabian Doil, L. Schega, Johannes Tümler, R. Mecke","doi":"10.1145/1836248.1836255","DOIUrl":"https://doi.org/10.1145/1836248.1836255","url":null,"abstract":"Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increasing visual load in respective systems. We investigated visual performance when users performed visual search tasks or dual tasks only on the optical see-through device, only on a computer screen, or switching between both. In spite of having controlled for basic differences between both devices, switching between the presentation devices produced costs in visual performance. The assumption that these decreases in performance are partly due to differences localizing the presented objects was confirmed by convergence data.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"8 1","pages":"41-48"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81176218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
Saliency for animated meshes with material properties
A. Bulbul, Çetin Koca, T. Çapin, U. Güdükbay
We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes, including their geometry, material, and motion. Each feature contributes to the final saliency map, which is view-independent and can therefore be used for both view-dependent and view-independent applications. To verify our saliency calculations, we performed an experiment in which we used an eye tracker to compare the saliencies of the regions that viewers look at with those of the other regions of the models. The results confirm that our saliency computation gives promising results. We also present several applications in which the saliency information is used.
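As a rough illustration of the idea of per-feature contributions to a single view-independent saliency map, a weighted per-vertex combination might look like the sketch below. This is only an assumed Python approximation, not the authors' actual computation; the feature arrays, the normalization, and the equal weights are hypothetical.

```python
import numpy as np

def combine_saliency(geometry_sal, material_sal, motion_sal,
                     weights=(1.0, 1.0, 1.0)):
    """Combine per-vertex feature saliencies into one view-independent map.

    Each input is a 1-D array with one saliency value per mesh vertex.
    The normalization and equal weighting are illustrative assumptions.
    """
    def normalize(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    w_geo, w_mat, w_mot = weights
    combined = (w_geo * normalize(geometry_sal)
                + w_mat * normalize(material_sal)
                + w_mot * normalize(motion_sal))
    return normalize(combined)

# Toy usage: a 5-vertex mesh with made-up per-feature saliencies.
geo = np.array([0.1, 0.8, 0.3, 0.9, 0.2])
mat = np.array([0.5, 0.2, 0.7, 0.4, 0.1])
mot = np.array([0.0, 0.6, 0.9, 0.1, 0.3])
print(combine_saliency(geo, mat, mot))
```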
{"title":"Saliency for animated meshes with material properties","authors":"A. Bulbul, Çetin Koca, T. Çapin, U. Güdükbay","doi":"10.1145/1836248.1836263","DOIUrl":"https://doi.org/10.1145/1836248.1836263","url":null,"abstract":"We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes including their geometry, material and motion. Each feature contributes to the final saliency map which is view independent; and therefore, can be used for view dependent and view independent applications. To verify our saliency calculations, we performed an experiment in which we use an eye tracker to compare the saliencies of the regions that the viewers look with the other regions of the models. The results confirm that our saliency computation gives promising results. We also present several applications in which the saliency information is used.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"1 1","pages":"81-88"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83578807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
3D visualization of archaeological uncertainty
Maria Sifniotis, Ben J. C. Jackson, K. Mania, N. Vlassis, P. Watten, M. White
By uncertainty, we mean an archaeological expert's level of confidence in an interpretation derived from gathered evidence. Archaeologists and computer scientists have urged caution in the use of 3D for archaeological reconstructions because the availability of other possible hypotheses is not always acknowledged. This poster presents a 3D visualization system for archaeological uncertainty.
{"title":"3D visualization of archaeological uncertainty","authors":"Maria Sifniotis, Ben J. C. Jackson, K. Mania, N. Vlassis, P. Watten, M. White","doi":"10.1145/1836248.1836284","DOIUrl":"https://doi.org/10.1145/1836248.1836284","url":null,"abstract":"By uncertainty, we define an archaeological expert's level of confidence in an interpretation deriving from gathered evidence. Archaeologists and computer scientists have urged caution in the use of 3D for archaeological reconstructions because the availability of other possible hypotheses is not always being acknowledged. This poster presents a 3D visualization system of archaeological uncertainty.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"28 1","pages":"162"},"PeriodicalIF":0.0,"publicationDate":"2010-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85746202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
How does a virtual peer influence children's distance from the roadway when initiating crossing?
Timofey Grechkin, Sabarish V. Babu, Christine J. Ziemer, Benjamin Chihak, J. Cremer, J. Kearney, J. Plumert
A bike rider's distance from the roadway is one of the factors that determine the safety of the crossing. First, it dictates the vantage point from which the rider sees the oncoming traffic. Second, it governs the distance that must be crossed to clear the beam of oncoming traffic. This study investigated how the behavior of a virtual peer in an immersive bicycling simulator influences how far away from the roadway children are when they initiate crossing.
{"title":"How does a virtual peer influence children's distance from the roadway when initiating crossing?","authors":"Timofey Grechkin, Sabarish V. Babu, Christine J. Ziemer, Benjamin Chihak, J. Cremer, J. Kearney, J. Plumert","doi":"10.1145/1620993.1621023","DOIUrl":"https://doi.org/10.1145/1620993.1621023","url":null,"abstract":"A bike rider's distance from the roadway is one of the factors that determine the safety of the crossing. First, it dictates the vantage point from which the rider sees the oncoming traffic. Second, it governs the distance that must be crossed to clear the beam of oncoming traffic. This study investigated how the behavior of a virtual peer in an immersive bicycling simulator influences how far away from the roadway children are when they initiate crossing.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"31 1","pages":"129"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78076404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human perception of quadruped motion
Ljiljana Skrba, C. O'Sullivan
In our research we are interested in human sensitivity to differences in animal gaits. We use point-light walkers as stimuli, and follow up with a study using a realistic 3D model. Previously it has been shown that humans can recognise human motion, gender, and the identity of an actor from a set of moving points [1973; 1977]. McDonnell et al. [2008] show that both shape and motion influence sex perception of virtual human characters. Mather and West [1993] have shown that people can recognise animals from point-light displays. In order to find out whether we can tell the difference between animals using motion cues, we captured the motion of farm animals.
{"title":"Human perception of quadruped motion","authors":"Ljiljana Skrba, C. O'Sullivan","doi":"10.1145/1620993.1621024","DOIUrl":"https://doi.org/10.1145/1620993.1621024","url":null,"abstract":"In our research we are interested in human sensitivity to differences in animal gaits. We use point light walkers as stimuli, and follow up with a study using a realistic 3D model. Previously it has been shown that humans can regonise human motion, gender and the identity of an actor from a set of moving points [1973; 1977]. McDonnell et al. [2008] show that both shape and motion influence sex perception of virtual human characters. Mather and West [1993] have shown that people can recognise animals from pointlight displays. In order to find out whether we can tell the difference between animals using motion cues, we captured the motion of farm animals.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"34 1","pages":"130"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87497138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Display considerations for night and low-illumination viewing
Rafał K. Mantiuk, Allan G. Rempel, W. Heidrich
An inadequately designed display viewed in the dark can easily cause dazzling glare and affect our night vision. In this paper we test a display design in which the spectral light emission is selected to reduce the impact of the display on night vision performance while at the same time ensuring good display legibility. We use long-wavelength light (red) that is easily visible to daylight vision photoreceptors (cones) but almost invisible to night vision photoreceptors (rods). We verify rod-cone separation in a psychophysical experiment, in which we measure contrast detection in the presence of a colored source of glare. In a separate user study we measure the range of display brightness settings that provide good legibility and are not distracting under low ambient lighting. Our results can serve as guidelines for designing displays that change their color scheme at low ambient light levels.
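The core idea, rendering content in long-wavelength red so that cones can read it while rods stay dark-adapted, can be sketched as a simple color-mapping rule. The Python below is a hedged illustration only; the lux threshold, the luminance weights, and the hard switch to the red channel are assumptions rather than the authors' design.

```python
def night_mode_color(rgb, ambient_lux, threshold_lux=10.0):
    """Shift a display color toward long-wavelength red in the dark.

    rgb: (r, g, b) values in 0..1. Below the assumed ambient threshold,
    the green and blue channels (to which rods are most sensitive) are
    dropped and the content is carried by the red channel instead.
    """
    r, g, b = rgb
    if ambient_lux >= threshold_lux:
        return rgb  # normal rendering under sufficient ambient light
    # Preserve approximate luminance, but emit it on the red channel only.
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (min(1.0, luminance), 0.0, 0.0)

# Example: a bluish UI color viewed in a dark room.
print(night_mode_color((0.4, 0.7, 0.9), ambient_lux=2.0))
```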
{"title":"Display considerations for night and low-illumination viewing","authors":"Rafał K. Mantiuk, Allan G. Rempel, W. Heidrich","doi":"10.1145/1620993.1621005","DOIUrl":"https://doi.org/10.1145/1620993.1621005","url":null,"abstract":"An inadequately designed display viewed in the dark can easily cause dazzling glare and affect our night vision. In this paper we test a display design in which the spectral light emission is selected to reduce the impact of the display on night vision performance while at the same time ensuring good display legibility. We use long-wavelength light (red) that is easily visible to daylight vision photoreceptors (cones) but almost invisible to night vision photoreceptors (rods). We verify rod-cone separation in a psychophysical experiment, in which we measure contrast detection in the presence of a colored source of glare. In a separate user study we measure the range of display brightness settings that provide good legibility and are not distracting under low ambient lighting. Our results can serve as a guidelines for designing the displays that change their color scheme at low ambient light levels.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"20 1","pages":"53-58"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80588883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
Effects of animation, user-controlled interactions, and multiple static views in understanding 3D structures
Taylor Sando, Melanie Tory, Pourang Irani
Visualizations of 3D spatial structures use various techniques, such as user-controlled interactions or 2D projection views, to convey the structure to users. Researchers have shown that motion cues can help assimilate the structure of 3D spatial data, particularly for discerning occluded parts of the objects. However, motion cues or smooth animations also have costs: they increase the viewing time. What remains unclear is whether any one particular viewing modality allows users to understand and operate on the 3D structure as effectively as a combination of 2D and 3D static views. To assess the effectiveness of understanding 3D structures, we carried out three experiments. In all three experiments we evaluated the effectiveness of perceiving 3D structures with self-controlled interactions, animated transitions, or 2D+3D static views. In the first experiment, subjects were given a task to estimate the relative distances of objects in a 3D scene. In the second experiment, subjects made judgements to discern and identify the existence of differences between 3D objects. In the third experiment, participants were required to reconstruct a 3D spatial structure based on the 3D models presented to them. Results of the three experiments reveal that participants were more accurate and performed the spatial tasks faster with smooth animations and self-controlled interactions than with 2D+3D static views. Our results overall suggest that the costs involved in interacting with or animating a 3D spatial structure are significantly outweighed by the perceptual benefits derived from viewing and interacting in these modes of presentation.
{"title":"Effects of animation, user-controlled interactions, and multiple static views in understanding 3D structures","authors":"Taylor Sando, Melanie Tory, Pourang Irani","doi":"10.1145/1620993.1621008","DOIUrl":"https://doi.org/10.1145/1620993.1621008","url":null,"abstract":"Visualizations of 3D spatial structures use various techniques such as user controlled interactions or 2D projection views to convey the structure to users. Researchers have shown that motion cues can help assimilate the structure of 3D spatial data, particularly for discerning occluded parts of the objects. However, motion cues or smooth animations also have costs - they increase the viewing time. What remains unclear is whether any one particular viewing time. What remains unclear is whether any one particular viewing modality allows users to understand and operate on the 3D structure as effectively as a combination of 2D and 3D static views. To assess the effectiveness of understanding 3D structures, we carried out three experiments. In all three experiments we evaluated the effectiveness of perceiving 3D structures with either self controlled interactions, animated transitions, and 2D+3D static views. In the first experiment, subjects were given a task to estimate the relative distances of objects in a 3D scene. In the second experiment, subjects made judgements to discern and identify the existence of differences between 3D objects. In the third experiment, participants were required to reconstruct a 3D spatial structure based on the 3D models presented to them. Results of the three experiments reveal that participants were more accurate and performed the spatial tasks faster with smooth animations and self-controlled interactions than with 2D+3D static views. Our results overall suggest that the costs involved in interacting or animating a 3D spatial structure are significantly outweighed by the perceptual benefits derived from viewing and interacting in these modes of presentation.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"44 1","pages":"69-76"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81571955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Depth judgment measures and occluders in near-field augmented reality
Gurjot Singh, J. Swan, J. A. Jones, Lorraine Lin, S. Ellis
This poster describes a tabletop-based experiment which studied two complementary depth judgment protocols and the effect of an occluding surface on depth judgments in augmented reality (AR). The experimental setup (Figure 1) broadly replicated the setup described by Ellis and Menges [1998], and studied near-field distances between 30 and 60 centimeters. We collected data from six participants; we consider this to be a pilot study.

These distances are important for many AR applications that involve reaching and manipulating; examples include AR-assisted surgery and medical training devices, maintenance tasks, and table-top meetings where the participants are jointly interacting and manipulating shared virtual objects in the middle of the table. Some of these tasks involve "x-ray vision", where AR users perceive objects which are located behind solid, opaque surfaces.

Ellis and Menges [1998] studied tabletop distances using a setup similar to Figure 1. They used a closed-loop perceptual matching task to examine near-field distances of 0.4 to 1.0 meters, and studied the effects of an occluding surface (the x-ray vision condition), convergence, accommodation, observer age, and monocular, biocular, and stereo AR displays. They found that monocular viewing degraded the depth judgment, and that the x-ray vision condition caused a change in vergence angle which resulted in depth judgments being biased towards the observer. They also found that cutting a hole in the occluding surface, which made the depth of the virtual object physically plausible, reduced the depth judgment bias.

The experimental setup (Figure 1) involved a height-adjustable tabletop that allowed observers to easily reach both above and below the table. We used two complementary dependent measures to assess depth judgments: we replicated the closed-loop matching task (Task = closed) of Ellis and Menges [1998]; observers manipulated a small light to match the depth of the bottom of a slowly rotating, upside-down pyramid (the target object). In addition, we used an open-loop blind reaching task (Task = open), in order to compare the closed-loop task to a more perceptually-motivated depth judgment. Our occluding surface was composed of circular foam-core covered with a highly-salient checkerboard pattern; when observers saw the occluder (Occluder = present, otherwise Occluder = absent) it was presented 10 cm in front of the target. We used a factorial, within-subjects experimental design; observers made binocular stereo depth judgments.

Figure 2 shows the results by task, occluder, and distance; the results are grouped by task for clarity, and should be judged relative to the 45° veridical lines. Figure 3 shows the results by task and occluder, expressed as normalized error = judged distance / veridical distance. All conditions underestimated the veridical distance (100%) to some degree. The closed-loop task replicated the finding of Ellis and Menges [1998]: the presence of the occluder biased observers' depth judgments. The open-loop, more perceptually-based task resulted in larger underestimation; given the reduced depth cues available in the open-loop task, the larger error is not surprising. Interestingly, in the open-loop condition, observers judged the target to be farther away when the occluder was present. We consider this to be a pilot study; we plan to collect data from more participants and to refine the experimental setup and design.
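The normalized error measure used for Figure 3 (judged distance divided by veridical distance) is simple to compute; the snippet below shows it on made-up trials at the studied 30-60 cm near-field distances.

```python
def normalized_error(judged_cm, veridical_cm):
    """Normalized error as defined in the poster: judged / veridical.

    Values below 1.0 indicate underestimation of the true distance.
    """
    return judged_cm / veridical_cm

# Hypothetical (judged, veridical) trials in centimeters.
trials = [(27.0, 30.0), (41.0, 45.0), (52.0, 60.0)]
for judged, veridical in trials:
    print(f"{veridical:.0f} cm target -> {normalized_error(judged, veridical):.2f}")
```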
{"title":"Depth judgment measures and occluders in near-field augmented reality","authors":"Gurjot Singh, J. Swan, J. A. Jones, Lorraine Lin, S. Ellis","doi":"10.1145/1620993.1621021","DOIUrl":"https://doi.org/10.1145/1620993.1621021","url":null,"abstract":"This poster describes a tabletop-based experiment which studied two complimentary depth judgment protocols and the effect of an occluding surface on depth judgments in augmented reality (AR). The experimental setup (Figure 1) broadly replicated the setup described by Ellis and Menges [1998], and studied near-field distances between 30 and 60 centimeters. We collected data from six participants; we consider this to be a pilot study.\u0000 These distances are important for many AR applications that involve reaching and manipulating; examples include AR-assisted surgery and medical training devices, maintenance tasks, and table-top meetings where the participants are jointly interacting and manipulating shared virtual objects in the middle of the table. Some of these tasks involve \"x-ray vision\", where AR users perceive objects which are located behind solid, opaque surfaces.\u0000 Ellis and Menges [1998] studied tabletop distances using a setup similar to Figure 1. They used a closed-loop perceptual matching task to examine near-field distances of 0.4 to 1.0 meters, and studied the effects of an occluding surface (the x-ray vision condition), convergence, accommodation, observer age, and monocular, biocular, and stereo AR displays. They found that monocular viewing degraded the depth judgment, and that the x-ray vision condition caused a change in vergence angle which resulted in depth judgments being biased towards the observer. They also found that cutting a hole in the occluding surface, which made the depth of the virtual object physically plausible, reduced the depth judgment bias.\u0000 The experimental setup (Figure 1) involved a height-adjustable tabletop that allowed observers to easily reach both above and below the table. We used two complimentary dependent measures to assess depth judgments: we replicated the closed-loop matching task (Task = closed) of Ellis and Menges [1998]; observers manipulated a small light to match the depth of the bottom of a slowly rotating, upside-down pyramid (the target object). In addition, we used an open-loop blind reaching task (Task = open), in order to compare the closed-loop task to a more perceptually-motivated depth judgment. Our occluding surface was composed of circular foam-core covered with a highly-salient checkerboard pattern; when observers saw the occluder (Occluder = present, otherwise Occluder = absent) it was presented 10 cm in front of the target. We used a factorial, within-subjects experimental design; observers made binocular stereo depth judgments.\u0000 Figure 2 shows the results by task, occluder, and distance; the results are grouped by task for clarity, and should be judged relative to the 45° veridical lines. Figure 3 shows the results by task and occluder, expressed as normalized error = judged distance / veridical distance. All conditions underestimated the veridical distance of 100% to some degree. The closed-loop task replicated the finding of Ellis and Menges [1998]: the presence of the occlud","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. 
Symposium on Applied Perception in Graphics and Visualization","volume":"6 1","pages":"127"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84868713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Saliency maps of high dynamic range images
J. Petit, R. Brémond, Jean-Philippe Tarel
A number of computational models of visual attention have been proposed based on the concept of a saliency map, most of them validated using oculometric data. They are widely used for Computer Graphics applications with Low Dynamic Range images, mainly for image rendering, in order to avoid spending too much computing time on non-salient areas. However, these algorithms have so far not been used with High Dynamic Range (HDR) inputs. In this paper, we show that in the case of HDR images, the predictions using algorithms based on [Itti and Koch 2000] are less accurate than with 8-bit images. To improve the saliency computation for HDR inputs, we propose a new algorithm derived from [Itti and Koch 2000]. From an eye tracking experiment with an HDR scene, we show that this algorithm leads to good results for the saliency map computation, with a better fit between the saliency map and the ocular fixation map than Itti's algorithm.
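One plausible ingredient of adapting an Itti-Koch-style pipeline to HDR input is compressing the dynamic range (for example with a log transform of luminance) before the center-surround differencing. The sketch below illustrates only that single step with assumed filter scales; it is not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_luminance_saliency(hdr_rgb, center_sigma=2.0, surround_sigma=8.0):
    """Center-surround contrast on log luminance of an HDR image.

    hdr_rgb: float array of shape (H, W, 3) holding linear radiance.
    The log transform and the two Gaussian scales are illustrative
    assumptions, not parameters from the paper.
    """
    luminance = (0.2126 * hdr_rgb[..., 0]
                 + 0.7152 * hdr_rgb[..., 1]
                 + 0.0722 * hdr_rgb[..., 2])
    log_lum = np.log1p(luminance)                 # compress the HDR range
    center = gaussian_filter(log_lum, center_sigma)
    surround = gaussian_filter(log_lum, surround_sigma)
    saliency = np.abs(center - surround)          # center-surround contrast
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency

# Toy HDR frame: a very bright patch on a dim background.
img = np.full((64, 64, 3), 0.05)
img[24:40, 24:40] = 50.0
print(hdr_luminance_saliency(img).max())  # expected to be 1.0
```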
{"title":"Saliency maps of high dynamic range images","authors":"J. Petit, R. Brémond, Jean-Philippe Tarel","doi":"10.1145/1620993.1621028","DOIUrl":"https://doi.org/10.1145/1620993.1621028","url":null,"abstract":"A number of computational models of visual attention have been proposed based on the concept of saliency map, most of them validated using oculometric data. They are widely used for Computer Graphics applications with Low Dynamic Range images, mainly for image rendering, in order to avoid spending too much computing time on non salient areas. However, these algorithms were not used so far with High Dynamic Range (HDR) inputs. In this paper, we show that in the case of HDR images, the predictions using algorithms based on [Itti and Koch 2000] are less accurate than with 8-bit images. To improve the saliency computation for HDR inputs, we propose a new algorithm derived from [Itti and Koch 2000]. From an eye tracking experiment with a HDR scene, we show that this algorithm leads to good results for the saliency map computation, with a better fit between the saliency map and the ocular fixation map than Itti's algorithm.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"18 1","pages":"118-130"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90633161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
The interaction between motion and form in expression recognition
D. Cunningham, C. Wallraven
Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point light faces. Since they show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated. In the second, the size of the images was varied. Overall, dynamic expressions performed better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence that shows the critical importance of dynamic information for processing of facial expressions: As long as motion information is present, very little spatial information is required.
{"title":"The interaction between motion and form in expression recognition","authors":"D. Cunningham, C. Wallraven","doi":"10.1145/1620993.1621002","DOIUrl":"https://doi.org/10.1145/1620993.1621002","url":null,"abstract":"Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point light faces. Since they show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated. In the second, the size of the images was varied. Overall, dynamic expressions performed better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence that shows the critical importance of dynamic information for processing of facial expressions: As long as motion information is present, very little spatial information is required.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"10 1","pages":"41-44"},"PeriodicalIF":0.0,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75788207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24