
PETMEI '11: Latest Publications

Discrimination of gaze directions using low-level eye image features
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029961
Yanxia Zhang, A. Bulling, Hans-Werner Gellersen
In mobile daily life settings, video-based gaze tracking faces challenges associated with changing lighting conditions and artefacts in the video images caused by head and body movements. These challenges call for new methods that are robust to such influences. In this paper we investigate the problem of gaze estimation, more specifically how to discriminate different gaze directions from eye images. In a 17-participant user study we record eye images for 13 different gaze directions using a standard webcam. We extract a total of 50 features from these images that encode information on color, intensity, and orientation. Using mRMR feature selection and a k-nearest neighbor (kNN) classifier, we show that we can estimate these gaze directions with a mean recognition performance of 86%.
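A minimal sketch of the kind of pipeline the abstract describes, greedy mRMR feature selection followed by kNN classification, is shown below. The stand-in data, the mutual-information-based mRMR scoring, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def mrmr_select(X, y, k):
    """Greedily pick k features maximizing relevance minus mean redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)  # MI(feature; class)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and already-selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Stand-in data: 200 samples of 50 image features, 13 gaze-direction labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 13, size=200)

features = mrmr_select(X, y, k=10)
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X[:, features], y, cv=5).mean())
```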
Citations: 16
Human factor affects eye movement pattern during riding motorcycle on the mountain
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029965
Haiwei Dong, Zhiwei Luo
Human eyes are of great importance in perception, cognition, movement, and other processes, as about 80% of the information about the surrounding world comes from vision. By analyzing eye movement patterns, we can clarify how humans use their eyes in everyday life. Because humans live in communities that are artificial environments, various man-made signs, objects, and surrounding people influence them, particularly their eye movement patterns. To fully understand these patterns, we have to consider human factors. This paper focuses on clarifying the eye movement pattern during motorcycle riding on a mountain road. We use a mobile eye-mark tracking system to record eye motion together with the forward view. With reference to the recorded video, eye-mark analysis and fixation-point analysis verify the influence of human factors. In addition, we provide suggestions for promoting safe riding.
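Fixation-point analysis of this kind typically starts from a fixation detector. Below is a minimal sketch of a standard dispersion-threshold (I-DT) detector; the sampling rate and thresholds are illustrative assumptions, and the paper does not specify which detection method was used.

```python
import numpy as np

def detect_fixations(x, y, fs=30.0, max_dispersion=0.03, min_duration_s=0.1):
    """Return (start, end) sample-index pairs of fixations in normalized gaze data."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def dispersion(a, b):  # spread of the gaze points in window [a, b)
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    min_len = max(int(min_duration_s * fs), 2)
    fixations, i = [], 0
    while i + min_len <= len(x):
        j = i + min_len
        if dispersion(i, j) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < len(x) and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```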
Citations: 0
Implementing gaze control for peripheral devices
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029960
Jurek Breuninger, C. Lange, K. Bengler
The goal of the project "Gaze Controlled Interaction with Peripheral Devices" was to extend the head-based eye tracking system DIKABLIS to detect, in real time, gaze allocation to previously defined Areas of Interest (AOIs). This allows various events or commands to be initiated when a test person wearing the head unit directs their gaze into an AOI. The commands can be used to interact with different devices, so the tool for monitoring and analyzing gaze behavior becomes an interaction medium. With such gaze control, multi-modal interaction concepts can be realized. The project's primary aim was to give people with tetraplegia a means of controlling devices in their home. The experimental set-up was a TV set that can be controlled by gaze.
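The real-time AOI mechanism described here reduces to hit-testing gaze samples against screen regions and firing a command after a dwell. Below is a minimal sketch of that logic; the AOI names, coordinates, and dwell threshold are hypothetical and unrelated to the actual DIKABLIS interface.

```python
# Hypothetical AOIs as screen-space rectangles: (x_min, y_min, x_max, y_max).
AOIS = {
    "tv_volume_up": (100, 100, 200, 200),
    "tv_channel_up": (300, 100, 400, 200),
}
DWELL_SECONDS = 1.0  # gaze must stay inside one AOI this long to trigger

def hit_test(x, y):
    """Return the name of the AOI containing the gaze point, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def process_gaze_stream(samples, on_command):
    """samples: iterable of (timestamp_s, x, y); calls on_command(aoi) after a dwell."""
    current_aoi, enter_time = None, None
    for t, x, y in samples:
        aoi = hit_test(x, y)
        if aoi != current_aoi:
            current_aoi, enter_time = aoi, t  # gaze entered a new AOI (or left one)
        elif aoi is not None and t - enter_time >= DWELL_SECONDS:
            on_command(aoi)
            enter_time = t  # re-arm so the command does not fire on every sample

# Example: three samples dwelling in the first AOI for 1.1 s.
stream = [(0.0, 150, 150), (0.5, 155, 152), (1.1, 152, 148)]
process_gaze_stream(stream, on_command=print)  # prints "tv_volume_up"
```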
Citations: 14
Semantic analysis of mobile eyetracking data
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029958
J. Pelz
Researchers using laboratory-based eyetracking systems now have access to sophisticated data-analysis tools to reduce raw gaze data, but the huge data sets coming from wearable eyetrackers cannot be analyzed with the same tools. The very lack of constraints that makes mobile systems such powerful tools prevents analysis tools designed for static or tracked observers from working with freely moving observers. Proposed solutions include infrared markers hidden in the scene to provide reference points, Simultaneous Localization and Mapping (SLAM), and multi-view geometry techniques that build models from multiple views of a scene. These methods map fixations onto predefined or extracted 3D scene models, allowing traditional static-scene analysis tools to be used. Another approach to analyzing mobile eyetracking data is to code fixations with semantically meaningful labels rather than mapping them to fixed 3D locations. This offers two important advantages over the model-based methods: semantic mapping allows coding of dynamic scenes without the need to explicitly track objects, and it provides an inherently flexible and extensible object-based coding scheme.
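The object-based semantic coding the abstract describes can be made concrete with a small data structure: each fixation carries a label rather than a 3D location, and analysis reduces to aggregating over labels. A minimal sketch, with hypothetical labels:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: int
    duration_ms: int
    label: str  # semantic label from a human coder or an object classifier

def dwell_time_per_label(fixations):
    """Total fixation time per semantic category, in milliseconds."""
    totals = defaultdict(int)
    for f in fixations:
        totals[f.label] += f.duration_ms
    return dict(totals)

# Hypothetical coded data from a mobile recording: no 3D scene model needed.
coded = [
    Fixation(0, 320, "oncoming_car"),
    Fixation(320, 180, "road_sign"),
    Fixation(500, 450, "oncoming_car"),
]
print(dwell_time_per_label(coded))  # {'oncoming_car': 770, 'road_sign': 180}
```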
Citations: 1
The research framework of eye-tracking based mobile device usability evaluation
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029964
Shiwei Cheng
Eye-tracking is a valuable tool for mobile device usability research, but many challenges remain in creating good usability evaluations, such as obtaining sufficiently accurate eye-movement data given the small visual angle a real mobile device subtends. This paper presents a research framework that combines a remote eye-tracker and a portable eye-tracker for both quantitative and qualitative evaluation. We report an example in which a mobile device user interface is analyzed in an on-screen simulation using a remote eye-tracker and on the real device using a portable eye-tracker, yielding a list of usability problems and design recommendations. This illustrates the feasibility and effectiveness of the proposed research framework.
Citations: 18
Saliency-based image editing for guiding visual attention
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029968
Aiko Hagiwara, A. Sugimoto, K. Kawamoto
The most important part of an information system that assists human activities is a natural interface with human beings. Gaze information strongly reflects human interest and attention, and thus a gaze-based interface is promising for future use. In particular, if we can smoothly guide the user's visual attention toward a target without interrupting their current visual attention, the usefulness of a gaze-based interface will be greatly enhanced. To realize such an interface, this paper proposes a method for editing an image, given a region within it, to synthesize an image in which that region is the most salient. Our method first computes a saliency map of a given image and then iteratively adjusts the intensity and color until the saliency inside the region becomes the highest in the entire image. Experimental results confirm that our image editing method naturally draws human visual attention toward the specified region.
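A minimal sketch of the iterative loop described above, substituting Hou and Zhang's spectral-residual saliency for the paper's (unspecified) saliency model and a simple grayscale contrast boost for its intensity and color adjustments:

```python
import numpy as np
from scipy.ndimage import convolve

def spectral_residual_saliency(gray):
    """Spectral-residual saliency (Hou & Zhang 2007) for a float grayscale image."""
    spectrum = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # The "residual" is the log amplitude minus its local average.
    residual = log_amp - convolve(log_amp, np.ones((3, 3)) / 9.0, mode="nearest")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def make_region_salient(img, region, step=1.05, max_iters=50):
    """img: grayscale array in [0, 1]; region: (y0, y1, x0, x1) slice bounds."""
    y0, y1, x0, x1 = region
    out = img.copy()
    for _ in range(max_iters):
        sal = spectral_residual_saliency(out)
        peak_y, peak_x = np.unravel_index(np.argmax(sal), sal.shape)
        if y0 <= peak_y < y1 and x0 <= peak_x < x1:
            break  # the target region now holds the global saliency maximum
        # Stretch contrast around the patch mean to raise the region's saliency.
        patch = out[y0:y1, x0:x1]
        out[y0:y1, x0:x1] = np.clip((patch - patch.mean()) * step + patch.mean(), 0.0, 1.0)
    return out

# Hypothetical usage: make the central 40x40 patch of a noise image most salient.
rng = np.random.default_rng(0)
img = rng.uniform(size=(128, 128))
edited = make_region_salient(img, region=(44, 84, 44, 84))
```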
Citations: 61
Combining gaze with manual interaction to extend physical reach
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029966
J. Turner, A. Bulling, Hans-Werner Gellersen
Situated public displays and interactive surfaces are becoming ubiquitous in our daily lives. Issues arise with these devices when attempting to interact over a distance or with content that is physically out of reach. In this paper we outline three techniques that combine gaze with manual hand-controlled input to move objects. We demonstrate and discuss how these techniques could be applied to two scenarios involving (1) a multi-touch surface and (2) a public display and a mobile device.
Citations: 26
Analysing EOG signal features for the discrimination of eye movements with wearable devices
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029962
Mélodie Vidal, A. Bulling, Hans-Werner Gellersen
Eye tracking research in human-computer interaction and experimental psychology has traditionally focused on stationary devices and a small number of common eye movements. The advent of pervasive eye tracking promises new applications, such as eye-based mental health monitoring or eye-based activity and context recognition. These applications may require further research on additional eye movement types, such as smooth pursuits and the vestibulo-ocular reflex, as these movements have not been studied as extensively as saccades, fixations, and blinks. In this paper we report our first step towards an effective discrimination of these movements. In a user study we collect naturalistic eye movements from 19 people using the two most common measurement techniques (EOG and IR-based). We develop a set of basic signal features that we extract from the collected eye movement data and show that a feature-based approach has the potential to discriminate between saccades, smooth pursuits, and vestibulo-ocular reflex movements.
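A minimal sketch of window-level feature extraction from a two-channel EOG signal; the specific features (velocity and dispersion statistics) and the sampling rate are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def eog_window_features(horizontal, vertical, fs=128.0):
    """Basic velocity and dispersion features over one window of EOG samples."""
    feats = {}
    for name, sig in (("h", np.asarray(horizontal, float)),
                      ("v", np.asarray(vertical, float))):
        vel = np.gradient(sig) * fs  # first derivative, in signal units per second
        feats[f"{name}_vel_mean"] = float(np.mean(np.abs(vel)))
        feats[f"{name}_vel_max"] = float(np.max(np.abs(vel)))
        feats[f"{name}_range"] = float(np.ptp(sig))
        feats[f"{name}_std"] = float(np.std(sig))
    return feats

# Hypothetical 0.5 s window: a step-like horizontal saccade, quiet vertical channel.
t = np.arange(64) / 128.0
h = np.where(t < 0.25, 0.0, 1.0) + 0.01 * np.random.default_rng(0).normal(size=64)
v = 0.01 * np.random.default_rng(1).normal(size=64)
print(eog_window_features(h, v))
```

Intuitively, saccades yield brief high-velocity peaks, smooth pursuits sustained moderate velocities, and the vestibulo-ocular reflex eye motion that counter-rotates against head movement, so a classifier over such per-window features can in principle separate them.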
Citations: 49
Eye tracking over small and large shopping displays
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029970
C. Tonkin, A. Duchowski, Joshua Kahue, Paul Schiffgens, Frank Rischner
Consumers' visual behavior is compared when shopping for a product on simulated shelving displays of two different sizes: an 11.5 ft. projection canvas and a 15.4 in. laptop screen. The results are compared with search times obtained over virtual (projected) and physical shelves, where the recorded search times indicate a tendency toward improved performance with larger displays. The implications for pervasive eye tracking systems point to the need to consider larger, more realistic environments.
Citations: 11
Speed-accuracy trade-off in dwell-based eye pointing tasks at different cognitive levels
Pub Date: 2011-09-18, DOI: 10.1145/2029956.2029967
Xinyong Zhang, Pianpian Xu, Qing Zhang, H. Zha
In this paper, we present a target-searching experiment to investigate how long a dwell time must be to maintain the speed-accuracy trade-off in eye pointing tasks that use dwell time as the activation mechanism. The experimental task, which takes into account three factors (cognitive complexity, dwell time, and visual feedback mode), mixes visual search and target acquisition: in each trial, subjects need to search for and recognize the target before the final selection. The results clarify the ranges of dwell time that allow users to avoid wrong selections as much as possible under different cognitive-load conditions. We also discuss the implications for user interface designs.
Citations: 6