
Eye Tracking Research & Application: Latest Publications

Saccade contingent updating in virtual reality
Pub Date : 2002-03-25 DOI: 10.1145/507072.507092
J. Triesch, Brian T. Sullivan, M. Hayhoe, D. Ballard
We are interested in saccade contingent scene updates where the visual information presented in a display is altered while a saccadic eye movement of an unconstrained, freely moving observer is in progress. Since saccades typically last only several tens of milliseconds depending on their size, this poses difficult constraints on the latency of detection. We have integrated two complementary eye trackers in a virtual reality helmet to simultaneously 1) detect saccade onsets with very low latency and 2) track the gaze with high precision albeit higher latency. In a series of experiments we demonstrate the system's capability of detecting saccade onsets with sufficiently low latency to make scene changes while a saccade is still progressing. While the method was developed to facilitate studies of human visual perception and attention, it may find interesting applications in human-computer interaction and computer graphics.
Citations: 36
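Low-latency saccade onset detection of the kind described above is commonly built on a velocity threshold: flag a saccade as soon as inter-sample gaze velocity exceeds a cutoff. A minimal sketch of that idea follows; the sample rate, threshold value, and function name are illustrative assumptions, not taken from the paper.

```python
# Hypothetical velocity-threshold saccade onset detector.
# Gaze samples are (x, y) positions in degrees of visual angle,
# arriving at a fixed sample rate. All numeric values are illustrative.

def detect_saccade_onsets(samples, sample_rate_hz=250.0, velocity_threshold=100.0):
    """Return sample indices where gaze velocity first exceeds the threshold."""
    dt = 1.0 / sample_rate_hz
    onsets = []
    in_saccade = False
    for i in range(1, len(samples)):
        (x0, y0), (x1, y1) = samples[i - 1], samples[i]
        # Approximate angular velocity (deg/s) from consecutive samples.
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if velocity > velocity_threshold and not in_saccade:
            onsets.append(i)        # saccade begins at this sample
            in_saccade = True
        elif velocity <= velocity_threshold:
            in_saccade = False      # velocity back below threshold: fixation
    return onsets
```

In practice an acceleration criterion is often added on top of the velocity test to reject tracker noise, and the threshold is tuned to the tracker's sampling rate.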
Differences in the infrared bright pupil response of human eyes
Pub Date : 2002-03-25 DOI: 10.1145/507072.507099
K. Nguyen, Cindy Wagner, David B. Koons, M. Flickner
In this paper, we describe experiments conducted to explain observed differences in the bright pupil response of human eyes. Many people observe the bright pupil response as the red-eye effect when taking flash photographs. However, there is significant variation in the magnitude of the bright pupil response across the population. Since many commercial gaze-tracking systems use the infrared bright pupil response for eye detection, a clear understanding of the magnitude and cause of the bright pupil variation gives critical insight into the robustness of gaze tracking systems. This paper documents studies we have conducted to measure the bright pupil differences using infrared light, and the factors hypothesized to lead to these differences.
Citations: 67
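Bright-pupil eye detectors of the kind discussed here commonly locate the pupil by differencing frames captured under on-axis and off-axis IR illumination: the pupil is bright in one and dark in the other, so the difference isolates it. A toy sketch of that standard differencing step (not the paper's procedure; frames are plain nested lists and the threshold is arbitrary):

```python
# Toy bright/dark pupil differencing. bright_frame is captured with
# on-axis IR illumination (bright pupil), dark_frame with off-axis
# illumination (dark pupil). The threshold value is arbitrary.

def pupil_difference(bright_frame, dark_frame, threshold=50):
    """Return (x, y) pixels whose bright-minus-dark difference exceeds threshold."""
    pupil_pixels = []
    for y, (row_b, row_d) in enumerate(zip(bright_frame, dark_frame)):
        for x, (b, d) in enumerate(zip(row_b, row_d)):
            if b - d > threshold:
                pupil_pixels.append((x, y))
    return pupil_pixels
```

The population variation the paper measures matters precisely here: a weak bright-pupil response shrinks the difference signal, pushing it under any fixed threshold.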
What do the eyes behold for human-computer interaction?
Pub Date : 2002-03-25 DOI: 10.1145/507072.507084
Roel Vertegaal
In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the "Midas Touch" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: What is the nature of the input information conveyed by eye movements and to what interactive functions can this information provide added value?
Citations: 4
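The "Midas Touch" discussion above can be made concrete with a dwell-time selector: a target fires once gaze has rested on it continuously past a threshold, which is exactly why normal visual scanning risks triggering unintended commands. A hypothetical sketch (the sample period, threshold, and function name are all made up for illustration):

```python
# Hypothetical dwell-time selection. gaze_targets holds the target id
# under gaze for each sample (or None). A target fires exactly once when
# continuous dwell reaches the threshold. All timing values are illustrative.

def dwell_select(gaze_targets, sample_period_ms=20, dwell_threshold_ms=500):
    """Return the sequence of targets selected by sustained dwell."""
    selections = []
    current, dwell = None, 0
    for target in gaze_targets:
        if target == current and target is not None:
            dwell += sample_period_ms
            if dwell == dwell_threshold_ms:   # fire once, on crossing the threshold
                selections.append(target)
        else:
            current, dwell = target, 0        # gaze moved: reset the dwell clock
    return selections
```

Note how scanning behavior (gaze hopping between targets) never accumulates dwell and so never selects, while any sufficiently long look, interested or not, becomes a command.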
A software-based eye tracking system for the study of air-traffic displays
Pub Date : 2002-03-25 DOI: 10.1145/507072.507087
J. Mulligan
This paper describes a software-based system for offline tracking of eye and head movements using stored video images, designed for use in the study of air-traffic displays. These displays are typically dense with information; to address the research questions, we wish to be able to localize gaze within a single word within a line of text (a few minutes of arc), while at the same time allowing some freedom of movement to the subject. Accurate gaze tracking in the presence of head movements requires high precision head tracking, and this was accomplished by registration of images from a forward-looking scene camera with a narrow field of view.
Citations: 9
Real-time eye detection and tracking under various light conditions
Pub Date : 2002-03-25 DOI: 10.1145/507072.507100
Zhiwei Zhu, K. Fujimura, Q. Ji
Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to lighting condition changes, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
Citations: 162
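As a rough illustration of the tracking half of the pipeline above, mean shift iteratively moves a point toward the weighted centroid of its local neighborhood in a likelihood map. The toy version below operates on a nested-list weight map with a fixed 3x3 window; the paper's actual tracker works on image appearance likelihoods, so treat this as the core iteration only.

```python
# Toy mean shift iteration over a 2D weight (likelihood) map.
# The 3x3 neighborhood and iteration count are simplifications.

def mean_shift(weights, start, iterations=10):
    """Shift (x, y) toward the weighted centroid of its 3x3 neighborhood."""
    x, y = start
    for _ in range(iterations):
        wsum = wx = wy = 0.0
        for ny in range(max(0, y - 1), min(len(weights), y + 2)):
            for nx in range(max(0, x - 1), min(len(weights[0]), x + 2)):
                w = weights[ny][nx]
                wsum += w
                wx += w * nx
                wy += w * ny
        if wsum == 0:
            break                      # no support nearby: stay put
        x, y = round(wx / wsum), round(wy / wsum)
    return (x, y)
```

Starting near the previous eye position, the point climbs toward the local likelihood peak, which is what makes mean shift a cheap frame-to-frame tracker once detection (here, the SVM verification step) has initialized it.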
Eye tracking in web search tasks: design implications
Pub Date : 2002-03-25 DOI: 10.1145/507072.507082
J. Goldberg, M. Stimson, Marion Lewenstein, Neil Scott, A. Wichansky
An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or user actions preceding page visits, (2) whether users were biased to traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens, and from 7-360+ seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as screen sequence increased. Navigation among portlets, when at least two columns exist, was biased towards horizontal search (across columns) as opposed to vertical search (within column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars are not reliably used for navigation cues. Initial design recommendations emphasized the need to place critical portlets on the left and top of the web portal area, and that related portlets do not need to appear in the same column. Further experimental replications are recommended to generalize these results to other applications.
Citations: 462
What attracts the eye to the location of missed and reported breast cancers?
Pub Date : 2002-03-25 DOI: 10.1145/507072.507095
C. Mello-Thoms, C. Nodine, H. Kundel
The primary detector of breast cancer is the human eye, as it examines mammograms searching for signs of the disease. Nonetheless, it has been shown that 10-30% of all cancers in the breast are not reported by the radiologist, even though most of these are visible retrospectively. Studies of eye position have shown that the eye tends to dwell in the locations of both reported and unreported cancers, indicating that the problem is not faulty visual search but rather one primarily related to perceptual and decision-making mechanisms. In this paper we model the areas that attracted the radiologists' visual attention when reading mammograms and that yielded a decision by the radiologist, whether that decision was overt or covert. We contrast the characteristics of areas containing cancers that were reported with those containing cancers that, albeit attracting attention, did not reach an internal conspicuity threshold to be reported.
Citations: 67
Designing attentive interfaces
Pub Date : 2002-03-25 DOI: 10.1145/507072.507077
Roel Vertegaal
In this paper, we propose a tentative framework for the classification of Attentive Interfaces, a new category of user interfaces. An Attentive Interface is a user interface that dynamically prioritizes the information it presents to its users, such that information processing resources of both user and system are optimally distributed across a set of tasks. The interface does this on the basis of knowledge --- consisting of a combination of measures and models --- of the past, present and future state of the user's attention, given the availability of system resources. We will show how the Attentive Interface provides a natural extension to the windowing paradigm found in Graphical User Interfaces. Our taxonomy of Attentive Interfaces allows us to identify classes of user interfaces that would benefit most from the ability to sense, model and optimize the user's attentive state. In particular, we show how systems that influence user workflow in concurrent task situations, such as those involved with management of multiparty communication, may benefit from such facilities.
Citations: 76
Reduced saliency of peripheral targets in gaze-contingent multi-resolutional displays: blended versus sharp boundary windows
Pub Date : 2002-03-25 DOI: 10.1145/507072.507091
E. Reingold, Lester C. Loschky
Gaze-contingent multi-resolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays, by dynamically placing high-resolution in a window at the center of gaze, with lower resolution everywhere else. GCMRDs are also useful for investigating the perceptual processes involved in natural scene viewing. Several such studies suggest that potential saccade targets in degraded regions are less salient than those in the high-resolution window. Consistent with this, Reingold, Loschky, Stampe and Shen [2001b] found longer initial saccadic latencies to a salient peripheral target in conditions with a high-resolution window and degraded surround than in an all low-pass filtered no-window condition. Nevertheless, these results may have been due to parafoveal load caused by saliency of the boundary between the high- and low-resolution areas. The current study extends Reingold, et al. [2001b] by comparing both sharp- and blended-resolution boundary conditions with an all low-resolution no-window condition. The results replicate the previous findings [Reingold et al. 2001b] but indicate that the effect is unaltered by the type of window boundary (sharp or blended). This rules out the parafoveal load hypothesis, while further supporting the hypothesis that potential saccade targets in the degraded region are less salient than those in the high-resolution region.
Citations: 21
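The windowing scheme this abstract describes can be sketched as a simple compositor: a high-resolution image is mixed with a low-resolution version around the gaze point, with either a sharp cut or a linear blend across the resolution boundary. This is a minimal illustrative sketch, not the authors' implementation; the function name, parameters, and the linear-ramp blend are assumptions made for the example.

```python
import numpy as np

def gcmrd_composite(high_res, low_res, gaze_xy, radius, blend=0.0):
    """Composite a high-resolution window centered on the gaze point over a
    low-resolution background, as in a gaze-contingent multi-resolutional
    display (GCMRD).

    blend=0.0 gives a sharp window boundary; blend>0 linearly ramps the
    mixing weight over `blend` pixels, hiding the resolution edge.
    """
    h, w = high_res.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance of every pixel from the current gaze point (x, y).
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    if blend > 0:
        # 1 inside the window, 0 outside, linear ramp across the boundary.
        alpha = np.clip((radius + blend - dist) / blend, 0.0, 1.0)
    else:
        alpha = (dist <= radius).astype(float)
    return alpha * high_res + (1.0 - alpha) * low_res

# Toy demo: full-detail image vs. a stand-in for its low-pass version.
hi = np.ones((64, 64))
lo = np.zeros((64, 64))
sharp = gcmrd_composite(hi, lo, gaze_xy=(32, 32), radius=10)
soft = gcmrd_composite(hi, lo, gaze_xy=(32, 32), radius=10, blend=4)
```

In a real GCMRD, `low_res` would be a low-pass-filtered rendering updated each frame from the eye tracker's gaze sample; the sharp/blended choice here corresponds to the two boundary conditions the study compares.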
FreeGaze: a gaze tracking system for everyday gaze interaction FreeGaze:用于日常凝视交互的凝视跟踪系统
Pub Date : 2002-03-25 DOI: 10.1145/507072.507098
Takehiko Ohno, N. Mukawa, A. Yoshikawa
In this paper we introduce a novel gaze tracking system called FreeGaze, which is designed for everyday gaze interaction. Among the many possible applications of gaze tracking systems, Human-Computer Interaction (HCI) is one of the most promising fields. However, existing systems require complicated and burdensome calibration and are not robust to measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, ours needs only two points for each individual calibration. Once this personalization finishes, the system needs no further calibration before each measurement session. Evaluation tests show that the system is accurate and suitable for everyday use.
本文介绍了一种新颖的注视跟踪系统FreeGaze,该系统是为日常注视交互而设计的。在注视跟踪系统的各种可能应用中,人机交互(HCI)是最有前途的领域之一。然而,现有的系统需要复杂和繁重的校准,并且对测量变化的鲁棒性不强。为了解决这些问题,我们引入了几何眼球模型和复杂的图像处理。与现有系统不同,我们的系统每次校准只需要两个点。当个性化完成后,我们的系统在每次测量之前不需要更多的校准。测试结果表明,该系统具有较好的准确性和较好的实用性。
{"title":"FreeGaze: a gaze tracking system for everyday gaze interaction","authors":"Takehiko Ohno, N. Mukawa, A. Yoshikawa","doi":"10.1145/507072.507098","DOIUrl":"https://doi.org/10.1145/507072.507098","url":null,"abstract":"In this paper we introduce a novel gaze tracking system called FreeGaze, which is designed for the use of everyday gaze interaction. Among various possible applications of gaze tracking system, Human-Computer Interaction (HCI) is one of the most promising fields. However, existing systems require complicated and burdensome calibration and are not robust to the measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, our system needs only two points for each individual calibration. When the personalization finishes, our system needs no more calibration before each measurement session. Evaluation tests show that the system is accurate and applicable to everyday use for the applications.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128915639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 255
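FreeGaze's two-point personalization fits parameters of a geometric eyeball model. As a rough illustration of why two calibration samples can suffice, the sketch below fits a per-axis linear map (gain and offset) from two known fixation points. The per-axis linear form, and every name in it, are simplifying assumptions for the example, not the paper's actual model.

```python
def fit_two_point_calibration(eye_pts, screen_pts):
    """Fit an independent gain + offset per axis from two calibration
    samples: eye-feature coordinates (e.g. pupil position) paired with
    the screen coordinates the user was fixating.

    Two samples give two equations per axis, exactly determining the
    two unknowns (gain, offset) of each linear map.
    """
    (ex0, ey0), (ex1, ey1) = eye_pts
    (sx0, sy0), (sx1, sy1) = screen_pts
    gx = (sx1 - sx0) / (ex1 - ex0)   # horizontal gain
    gy = (sy1 - sy0) / (ey1 - ey0)   # vertical gain
    ox = sx0 - gx * ex0              # horizontal offset
    oy = sy0 - gy * ey0              # vertical offset
    # Return a mapping from eye-feature space to screen space.
    return lambda ex, ey: (gx * ex + ox, gy * ey + oy)

# Two calibration fixations (hypothetical numbers), then a prediction.
gaze = fit_two_point_calibration([(0.1, 0.2), (0.5, 0.6)],
                                 [(100, 50), (900, 650)])
px, py = gaze(0.3, 0.4)
```

The two calibration points must differ in both coordinates for this simplified form to be solvable, which is why real systems like FreeGaze fold such constraints into a full geometric model rather than fitting axes independently.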