
Eye Tracking Research & Application: Latest Publications

Age differences in visual search for information on web pages
Pub Date : 2004-03-22 DOI: 10.1145/968363.968379
S. Josephson, Michael E. Holmes
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org. © 2004 ACM 1-58113-825-3/04/0003 $5.00 Age Differences in Visual Search for Information on Web Pages
Citations: 11
Visual deictic reference in a collaborative virtual environment
Pub Date : 2004-03-22 DOI: 10.1145/968363.968369
A. Duchowski, Nathan Cournia, Brian Cumming, Daniel McCallum, A. Gramopadhye, J. Greenstein, Sajay Sadasivan, R. Tyrrell
This paper evaluates the use of Visual Deictic Reference (VDR) in Collaborative Virtual Environments (CVEs). A simple CVE capable of hosting two (or more) participants simultaneously immersed in the same virtual environment is used as the testbed. One participant's VDR, obtained by tracking the participant's gaze, is projected to co-participants' environments in real-time as a colored lightspot. We compare the VDR lightspot when it is eye-slaved to when it is head-slaved and show that an eye-slaved VDR helps disambiguate the deictic point of reference, especially during conditions when the user's line of sight is decoupled from their head direction.
Citations: 26
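The core mechanism described above, projecting one participant's tracked gaze into co-participants' environments as a lightspot, amounts to intersecting a gaze ray with scene geometry. The sketch below is our illustration, not the paper's implementation: it assumes the tracker yields the eye position and a unit gaze direction in world coordinates, and that the target surface is a plane.

```python
import numpy as np

def project_vdr_lightspot(eye_pos, gaze_dir, plane_point, plane_normal):
    """Place a deictic lightspot where the tracked gaze ray hits a scene plane.

    eye_pos, gaze_dir: participant's eye position and gaze direction (3-D)
    plane_point, plane_normal: any point on the target surface and its normal
    Returns the 3-D intersection point, or None if the gaze ray is parallel
    to the surface or the surface lies behind the viewer.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # gaze ray parallel to the surface
    t = np.dot(plane_point - eye_pos, plane_normal) / denom
    if t < 0:
        return None                      # surface is behind the viewer
    return eye_pos + t * gaze_dir

# Viewer eye at 1.6 m height looking straight down -z at a wall 2 m away.
spot = project_vdr_lightspot(np.array([0.0, 1.6, 0.0]),
                             np.array([0.0, 0.0, -1.0]),
                             np.array([0.0, 0.0, -2.0]),
                             np.array([0.0, 0.0, 1.0]))
# spot == [0.0, 1.6, -2.0]
```

In a CVE the returned point would then be rendered as the colored lightspot in every co-participant's view.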
Head movement estimation for wearable eye tracker
Pub Date : 2004-03-22 DOI: 10.1145/968363.968388
C. Rothkopf, J. Pelz
In the study of eye movements in natural tasks, where subjects are able to freely move in their environment, it is desirable to capture a video of the surroundings of the subject not limited to a small field of view as obtained by the scene camera of an eye tracker. Moreover, recovering the head movements could give additional information about the type of eye movement that was carried out, the overall gaze change in world coordinates, and insight into high-order perceptual strategies. Algorithms for the classification of eye movements in such natural tasks could also benefit from the additional head movement data. We propose to use an omnidirectional vision sensor consisting of a small CCD video camera and a hyperbolic mirror. The camera is mounted on an ASL eye tracker and records an image sequence at 60 Hz. Several algorithms for the extraction of rotational motion from this image sequence were implemented and compared in their performance against the measurements of a Fasttrack magnetic tracking system. Using data from the eye tracker together with the data obtained by the omnidirectional image sensor, a new algorithm for the classification of different types of eye movements based on a Hidden-Markov-Model was developed.
Citations: 51
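The abstract's point about recovering "the overall gaze change in world coordinates" reduces to composing the recovered head rotation with the eye-in-head gaze vector. This is a minimal sketch of that composition only; the paper's actual contribution, extracting the head rotation from the omnidirectional image sequence, is not reproduced here, and the function names and angle convention are our own assumptions.

```python
import numpy as np

def yaw_pitch_matrix(yaw, pitch):
    """Head rotation matrix from yaw (about y) then pitch (about x), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    R_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return R_yaw @ R_pitch

def gaze_in_world(head_R, eye_in_head):
    """Rotate the eye-in-head gaze vector by the recovered head rotation."""
    v = head_R @ eye_in_head
    return v / np.linalg.norm(v)

# A 90-degree head yaw while the eye looks straight ahead in the head frame:
g = gaze_in_world(yaw_pitch_matrix(np.pi / 2, 0.0), np.array([0.0, 0.0, 1.0]))
# the world-frame gaze direction is now along +x
```

With both signals available, a pure head rotation with a counter-rotating eye (a vestibulo-ocular reflex) can be distinguished from a genuine gaze shift, which is exactly the extra information the classification algorithm exploits.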
ECSGlasses and EyePliances: using attention to open sociable windows of interaction
Pub Date : 2004-03-22 DOI: 10.1145/968363.968384
Jeffrey S. Shell, Roel Vertegaal, D. Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, C. Dickie
We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions, and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.
Citations: 38
Poster abstract: evaluation of hidden Markov models robustness in uncovering focus of visual attention from noisy eye-tracker data
Pub Date : 2004-03-22 DOI: 10.1145/968363.968373
Neil Cooke, M. Russell, A. Meyer
Eye position, captured via an eye tracker, can uncover the focus of visual attention by classifying eye movements into fixations, pursuits, or saccades [Duchowski 2003], with the former two indicating foci of visual attention. Such classification requires all other variability in eye tracking data, from sensor error to other eye movements (such as microsaccades, nystagmus, and drifts), to be accounted for effectively. The hidden Markov model provides a useful way of uncovering the focus of visual attention from eye position when the user undertakes visually oriented tasks, allowing variability in eye tracking data to be modelled as a random variable.
Citations: 2
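To make the HMM approach concrete, here is a minimal two-state decoder over eye-velocity samples: a "fixation" state with low-velocity Gaussian emissions and a "saccade" state with high-velocity emissions, decoded with the Viterbi algorithm. The emission parameters and sticky transition probabilities are illustrative assumptions chosen by us, not values from the poster, which evaluates robustness rather than specifying a model.

```python
import numpy as np

# Two hidden states: 0 = fixation (low velocity), 1 = saccade (high velocity).
# Gaussian emission parameters (deg/s) and sticky transitions are assumptions
# for illustration only.
MEANS, STDS = np.array([5.0, 300.0]), np.array([10.0, 100.0])
LOG_TRANS = np.log(np.array([[0.95, 0.05], [0.10, 0.90]]))
LOG_PRIOR = np.log(np.array([0.5, 0.5]))

def log_gauss(x, mean, std):
    """Log density of a Gaussian emission, vectorised over samples."""
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

def viterbi_classify(velocities):
    """Most likely fixation/saccade state sequence for a velocity trace."""
    v = np.asarray(velocities, dtype=float)
    emit = np.stack([log_gauss(v, m, s) for m, s in zip(MEANS, STDS)], axis=1)
    score = LOG_PRIOR + emit[0]
    back = np.zeros((len(v), 2), dtype=int)
    for t in range(1, len(v)):
        cand = score[:, None] + LOG_TRANS      # cand[i, j]: state i -> state j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emit[t]
    path = [int(score.argmax())]
    for t in range(len(v) - 1, 0, -1):         # backtrack to recover the path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

states = viterbi_classify([3, 6, 4, 250, 320, 280, 5, 4])
# three fixation samples, a saccade burst, then fixation again:
# states == [0, 0, 0, 1, 1, 1, 0, 0]
```

The sticky diagonal of the transition matrix is what buys robustness to noise: an isolated velocity spike is unlikely to flip the state, unlike a plain velocity threshold.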
Eye tracking off the shelf
Pub Date : 2004-03-22 DOI: 10.1145/968363.968375
D. Hansen, D. MacKay, J. P. Hansen, M. Nielsen
What if eye trackers could be downloaded and used immediately with standard cameras connected to a computer, without the need for an expert to set up the system? This is already the case for head trackers, so why not for eye trackers? Using components off-the-shelf (COTS) for camera-based eye tracking tasks has many advantages, but it certainly introduces several new problems, as fewer assumptions about the system can be made. As a consequence of using COTS, the price of eye tracking devices can be reduced while increasing the accessibility of these systems. Eye tracking based on COTS holds potential for a large number of possible applications, such as in the games industry and eye typing [Majaranta and Räihä 2002]. Different cameras may be purchased depending on the need and the amount of money the user is willing to spend on the camera. In this framework it is not possible to use IR light sources and other novel engineered devices, as they cannot be bought in a common hardware store. Very little control over the cameras and the geometry of the setup can be expected. The methods employed for eye tracking should therefore be able to handle changes in light conditions, image defocusing, and scale changes [Hansen and Pece 2003]. By the same token, pan-and-tilt cameras cannot be used, forcing such systems to be passive. Figure 1 shows a possible setup of a COTS-based eye tracker. When designing systems for the general public, it is unrealistic to assume that people are able to do camera calibration and make accurate setups of camera, monitor, and user. Since little is known about the setup, would this then require a vast number of calibration points for gaze estimation? That is, how many calibration points are really needed? Obviously, the more calibration points are used, the better the chances of inferring the mapping from the image to gaze direction. It would even be possible to sample the entire function space provided sufficiently many calibration points are given. From the point of view of the users, a low number of calibration points is preferred, as calibration may be considered a tedious procedure. Systems that require many calibration points for every session are therefore not likely to succeed. It is also important to know the accuracy of gaze determination when using COTS to determine their applicability for various tasks.
Citations: 36
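The "how many calibration points?" question has simple arithmetic behind it. A common choice (our illustration, not necessarily this paper's method) is a second-order polynomial mapping from the pupil position in the image to screen coordinates: six coefficients per screen axis, so at least six calibration points are needed, and extra points over-determine the least-squares fit and average out noise.

```python
import numpy as np

def quad_features(p):
    """Second-order polynomial features of a pupil-centre image position (x, y)."""
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_gaze_mapping(pupil_pts, screen_pts):
    """Least-squares fit of the image-to-screen mapping from calibration pairs.

    The quadratic map has 6 coefficients per screen axis, so at least 6
    calibration points are required; more points make the fit over-determined
    and therefore more robust to measurement noise.
    """
    A = np.array([quad_features(p) for p in pupil_pts])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_pts, dtype=float), rcond=None)
    return coeffs                      # shape (6, 2)

def estimate_gaze(coeffs, pupil_pt):
    """Map a new pupil position to an estimated on-screen gaze point."""
    return quad_features(pupil_pt) @ coeffs

# Synthetic 3x3 calibration grid generated by a known affine map (illustration only).
pupil = [(x, y) for x in (0.0, 0.5, 1.0) for y in (0.0, 0.5, 1.0)]
screen = [(400 + 800 * x, 300 + 600 * y) for x, y in pupil]
C = fit_gaze_mapping(pupil, screen)
# estimate_gaze(C, (0.25, 0.75)) recovers (600.0, 750.0)
```

A 9-point grid like the one above is the classic compromise the abstract alludes to: enough points to constrain the map, few enough to keep calibration tolerable for users.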
Coordination of component mental operations in a multiple-response task
Pub Date : 2004-03-22 DOI: 10.1145/968363.968380
Shu-Chieh Wu, R. Remington
Models of human performance typically focus on the mental components of task processing from discrete task trials. This approach neglects the advance planning of actions and overlapping of tasks characteristic of natural settings. The present research measures the joint timing of eye movements and manual responses in a typing-like task with the goal of extending models of discrete task performance to continuous domains. Following Pashler [1994] participants made separate choice responses to a series of five letters spread over a wide viewing area. Replicating Pashler's results, significant preview effects were found in both response time and eye movement data. Response to the first stimulus was delayed, with inter-response intervals for subsequent items rapid and flat across items. The eyes moved toward the next letter about 800 ms before the corresponding manual response (eye-hand span). Fixation dwell time was affected by stimulus luminance as well as difficulty of response mapping. The results suggest that fixation duration entails more than perceptual analyses. Implications of the results are discussed.
Citations: 13
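The eye-hand span reported above is just the per-item lag between the eyes landing on a stimulus and the manual response to it. A minimal sketch, with hypothetical timestamps of our own invention:

```python
def eye_hand_spans(fixation_onsets_ms, response_times_ms):
    """Per-item eye-hand span: how long the eyes lead the hand.

    fixation_onsets_ms[i]: time the eyes first land on item i
    response_times_ms[i]:  time of the manual response to item i
    Values near the ~800 ms reported in the abstract indicate the eyes
    running well ahead of the hands.
    """
    return [r - f for f, r in zip(fixation_onsets_ms, response_times_ms)]

# Hypothetical data for three items in a five-letter trial:
spans = eye_hand_spans([100, 600, 1100], [900, 1400, 1950])
# spans == [800, 800, 850]
```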
The determinants of web page viewing behavior: an eye-tracking study
Pub Date : 2004-03-22 DOI: 10.1145/968363.968391
Bing Pan, H. Hembrooke, Geri Gay, Laura A. Granka, Matthew K. Feusner, Jill K. Newman
The World Wide Web has become a ubiquitous information source and communication channel. With such an extensive user population, it is imperative to understand how web users view different web pages. Based on an eye tracking study of 30 subjects on 22 web pages from 11 popular web sites, this research intends to explore the determinants of ocular behavior on a single web page: whether it is determined by individual differences of the subjects, different types of web sites, the order of web pages being viewed, or the task at hand. The results indicate that gender of subjects, the viewing order of a web page, and the interaction between page order and site type influences online ocular behavior. Task instruction did not significantly affect web viewing behavior. Scanpath analysis revealed that the complexity of web page design influences the degree of scanpath variation among different subjects on the same web page. The contributions and limitations of this research, and future research directions are discussed.
Citations: 381
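Scanpath analysis of the kind mentioned above is commonly done by coding each fixation as a letter naming its area of interest (AOI) and comparing the resulting strings with an edit distance. The AOI coding below is our assumption; the string-edit approach is a standard technique, not necessarily the exact one used in this study.

```python
def scanpath_distance(a, b):
    """Levenshtein distance between two scanpaths coded as AOI strings.

    Each character names the AOI of one fixation, e.g. "ABCA" might mean
    header, nav, body, header. The distance counts the insertions,
    deletions, and substitutions needed to turn one scanpath into the
    other, quantifying scanpath variation between subjects on one page.
    """
    prev = list(range(len(b) + 1))           # distances against empty prefix
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

d = scanpath_distance("ABCAD", "ABDAD")
# one substitution (C -> D), so d == 1
```

Normalising by the longer string's length gives a page-level variation score, which is how a finding like "complex page designs show greater scanpath variation" can be made quantitative.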
A gaze contingent environment for fostering social attention in autistic children
Pub Date : 2004-03-22 DOI: 10.1145/968363.968367
R. Ramloll, C. Trepagnier, M. Sebrechts, A. Finkelmeyer
This paper documents the engineering of a gaze contingent therapeutic environment for the exploration and validation of a proposed rehabilitative technique addressing attention deficits in 24 to 54 months old autistic subjects. It discusses the current state of progress and lessons learnt so far while highlighting the outstanding engineering challenges of this project. We focus on calibration issues for this target group of users, explain the architecture of the system and present our general workflow for the construction of the gaze contingent environment. While this work is being undertaken for therapeutic purposes, it is likely to be relevant to the construction of gaze contingent displays for entertainment.
Citations: 31
Focus of attention and pilot error
Pub Date : 2004-03-22 DOI: 10.1145/968363.968377
E. Hanson
The evolution of cockpit automation is associated with an increased criticality of human error because missing, ignoring, or incorrectly processing even the smallest bit of relevant information can lead to an aircraft incident or accident occurrence. The most important factors associated with such occurrences are focus of attention and pilot error. Research performed at the National Aerospace Laboratory (NLR) has shown that changes in focus of attention can be measured via an eye tracking system (ASL 4000SU). The aim of this paper is to discuss how eye movements are used to indicate focus of attention, and how such information can be used to design new cockpit displays with decreased chances of pilot error.
Citations: 15