Saccade contingent updating in virtual reality
J. Triesch, Brian T. Sullivan, M. Hayhoe, D. Ballard
We are interested in saccade contingent scene updates, where the visual information presented in a display is altered while a saccadic eye movement of an unconstrained, freely moving observer is in progress. Since saccades typically last only several tens of milliseconds depending on their size, this poses difficult constraints on the latency of detection. We have integrated two complementary eye trackers in a virtual reality helmet to simultaneously 1) detect saccade onsets with very low latency and 2) track the gaze with high precision, albeit at higher latency. In a series of experiments we demonstrate the system's capability of detecting saccade onsets with sufficiently low latency to make scene changes while a saccade is still in progress. While the method was developed to facilitate studies of human visual perception and attention, it may find interesting applications in human-computer interaction and computer graphics.
{"title":"Saccade contingent updating in virtual reality","authors":"J. Triesch, Brian T. Sullivan, M. Hayhoe, D. Ballard","doi":"10.1145/507072.507092","DOIUrl":"https://doi.org/10.1145/507072.507092","url":null,"abstract":"We are interested in saccade contingent scene updates where the visual information presented in a display is altered while a saccadic eye movement of an unconstrained, freely moving observer is in progress. Since saccades typically last only several tens of milliseconds depending on their size, this poses dif cult constraints on the latency of detection. We have integrated two complementary eye trackers in a virtual reality helmet to simultaneously 1) detect saccade onsets with very low latency and 2) track the gaze with high precision albeit higher latency. In a series of experiments we demonstrate the system s capability of detecting saccade onsets with suf ciently low latency to make scene changes while a saccade is still progressing. While the method was developed to facilitate studies of human visual perception and attention, it may nd interesting applications in human-computer interaction and computer graphics.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127278928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differences in the infrared bright pupil response of human eyes
K. Nguyen, Cindy Wagner, David B. Koons, M. Flickner
In this paper, we describe experiments conducted to explain observed differences in the bright pupil response of human eyes. Many people know the bright pupil response as the red-eye effect in flash photography. However, there is significant variation in the magnitude of the bright pupil response across the population. Since many commercial gaze-tracking systems use the infrared bright pupil response for eye detection, a clear understanding of the magnitude and cause of this variation gives critical insight into the robustness of gaze-tracking systems. This paper documents studies we have conducted to measure bright pupil differences using infrared light and to hypothesize factors that lead to these differences.
{"title":"Differences in the infrared bright pupil response of human eyes","authors":"K. Nguyen, Cindy Wagner, David B. Koons, M. Flickner","doi":"10.1145/507072.507099","DOIUrl":"https://doi.org/10.1145/507072.507099","url":null,"abstract":"In this paper, we describe experiments conducted to explain observed differences in the bright pupil response of human eyes. Many people observe the bright pupil response as the red-eye effect when taking flash photography. However, there is significant variation in the magnitude of the bright pupil response across the population. Since many commercial gaze-tracking systems use the infrared bright pupil response for eye detection, a clear understanding of the magnitude and cause of the bright pupil variation gives critical insight into the robustness of gaze tracking systems. This paper documents studies we have conducted to measure the bright pupil differences using infrared light and hypothesis factors that lead to these differences.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128215330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What do the eyes behold for human-computer interaction?
Roel Vertegaal
In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye-based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the "Midas Touch" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: what is the nature of the input information conveyed by eye movements, and to what interactive functions can this information provide added value?
{"title":"What do the eyes behold for human-computer interaction?","authors":"Roel Vertegaal","doi":"10.1145/507072.507084","DOIUrl":"https://doi.org/10.1145/507072.507084","url":null,"abstract":"In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the \"Midas Touch\" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: What is the nature of the input information conveyed by eye movements and to what interactive functions can this information provide added value?","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133428265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A software-based eye tracking system for the study of air-traffic displays
J. Mulligan
This paper describes a software-based system for offline tracking of eye and head movements using stored video images, designed for use in the study of air-traffic displays. These displays are typically dense with information; to address the research questions, we wish to be able to localize gaze to a single word within a line of text (a few minutes of arc), while at the same time allowing the subject some freedom of movement. Accurate gaze tracking in the presence of head movements requires high-precision head tracking, and this was accomplished by registration of images from a forward-looking scene camera with a narrow field of view.
{"title":"A software-based eye tracking system for the study of air-traffic displays","authors":"J. Mulligan","doi":"10.1145/507072.507087","DOIUrl":"https://doi.org/10.1145/507072.507087","url":null,"abstract":"This paper describes a software-based system for offline tracking of eye and head movements using stored video images, designed for use in the study of air-traffic displays. These displays are typically dense with information; to address the research questions, we wish to be able to localize gaze within a single word within a line of text (a few minutes of arc), while at the same time allowing some freedom of movement to the subject. Accurate gaze tracking in the presence of head movements requires high precision head tracking, and this was accomplished by registration of images from a forward-looking scene camera with a narrow field of view.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114659559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time eye detection and tracking under various light conditions
Zhiwei Zhu, K. Fujimura, Q. Ji
Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with conventional appearance-based object recognition, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
{"title":"Real-time eye detection and tracking under various light conditions","authors":"Zhiwei Zhu, K. Fujimura, Q. Ji","doi":"10.1145/507072.507100","DOIUrl":"https://doi.org/10.1145/507072.507100","url":null,"abstract":"Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to lighting condition change. This tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. Based on combining the bright-pupil effect resulted from IR light and the conventional appearance-based object recognition technique, our method can robustly track eyes when the pupils are not very bright due to significant external illumination interferences. The appearance model is incorporated in both eyes detection and tracking via the use of support vector machine and the mean shift tracking. Additional improvement is achieved from modifying the image acquisition apparatus including the illuminator and the camera.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121113388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eye tracking in web search tasks: design implications
J. Goldberg, M. Stimson, Marion Lewenstein, Neil Scott, A. Wichansky
An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or to user actions preceding page visits, (2) whether users were biased toward traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens and from 7 to over 360 seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as the screen sequence increased. Navigation among portlets, when at least two columns existed, was biased toward horizontal search (across columns) as opposed to vertical search (within a column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars are not reliably used for navigation cues. Initial design recommendations emphasized placing critical portlets at the left and top of the web portal area, and noted that related portlets need not appear in the same column. Further experimental replications are recommended to generalize these results to other applications.
{"title":"Eye tracking in web search tasks: design implications","authors":"J. Goldberg, M. Stimson, Marion Lewenstein, Neil Scott, A. Wichansky","doi":"10.1145/507072.507082","DOIUrl":"https://doi.org/10.1145/507072.507082","url":null,"abstract":"An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or user actions preceding page visits, (2) whether users were biased to traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens, and from 7-360+ seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as screen sequence increased. Navigation among portlets, when at least two columns exist, was biased towards horizontal search (across columns) as opposed to vertical search (within column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars are not reliably used for navigation cues. Initial design recommendations emphasized the need to place critical portlets on the left and top of the web portal area, and that related portlets do not need to appear in the same column. Further experimental replications are recommended to generalize these results to other applications.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115416169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What attracts the eye to the location of missed and reported breast cancers?
C. Mello-Thoms, C. Nodine, H. Kundel
The primary detector of breast cancer is the human eye as it examines mammograms searching for signs of the disease. Nonetheless, it has been shown that 10-30% of all cancers in the breast are not reported by the radiologist, even though most of these are visible retrospectively. Studies of eye position have shown that the eye tends to dwell in the locations of both reported and unreported cancers, indicating that the problem is not faulty visual search but rather is primarily related to perceptual and decision-making mechanisms. In this paper we model the areas that attracted the radiologists' visual attention when reading mammograms and that yielded a decision by the radiologist, whether that decision was overt or covert. We contrast the characteristics of areas containing cancers that were reported with those containing cancers that, albeit attracting attention, did not reach an internal conspicuity threshold to be reported.
{"title":"What attracts the eye to the location of missed and reported breast cancers?","authors":"C. Mello-Thoms, C. Nodine, H. Kundel","doi":"10.1145/507072.507095","DOIUrl":"https://doi.org/10.1145/507072.507095","url":null,"abstract":"The primary detector of breast cancer is the human eye, as it examines mammograms searching for signs of the disease. Nonetheless, it has been shown that 10-30% of all cancers in the breast are not reported by the radiologist, even though most of these are visible retrospectively. Studies of eye position have shown that the eye tends to dwell in the locations of both reported and not reported cancers, indicating that the problem is not faulty visual search, but rather, that is primarily related to perceptual and decision making mechanisms. In this paper we model the areas that attracted the radiologists' visual attention when reading mammograms and that yielded a decision by the radiologist, being this decision overt or covert. We contrast the characteristics of areas that contain cancers that were reported from the ones that contain cancers that, albeit attracting attention, did not reach an internal conspicuity threshold to be reported.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122209971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing attentive interfaces
Roel Vertegaal
In this paper, we propose a tentative framework for the classification of Attentive Interfaces, a new category of user interfaces. An Attentive Interface is a user interface that dynamically prioritizes the information it presents to its users, such that information processing resources of both user and system are optimally distributed across a set of tasks. The interface does this on the basis of knowledge --- consisting of a combination of measures and models --- of the past, present and future state of the user's attention, given the availability of system resources. We will show how the Attentive Interface provides a natural extension to the windowing paradigm found in Graphical User Interfaces. Our taxonomy of Attentive Interfaces allows us to identify classes of user interfaces that would benefit most from the ability to sense, model and optimize the user's attentive state. In particular, we show how systems that influence user workflow in concurrent task situations, such as those involved with management of multiparty communication, may benefit from such facilities.
{"title":"Designing attentive interfaces","authors":"Roel Vertegaal","doi":"10.1145/507072.507077","DOIUrl":"https://doi.org/10.1145/507072.507077","url":null,"abstract":"In this paper, we propose a tentative framework for the classification of Attentive Interfaces, a new category of user interfaces. An Attentive Interface is a user interface that dynamically prioritizes the information it presents to its users, such that information processing resources of both user and system are optimally distributed across a set of tasks. The interface does this on the basis of knowledge --- consisting of a combination of measures and models --- of the past, present and future state of the user's attention, given the availability of system resources. We will show how the Attentive Interface provides a natural extension to the windowing paradigm found in Graphical User Interfaces. Our taxonomy of Attentive Interfaces allows us to identify classes of user interfaces that would benefit most from the ability to sense, model and optimize the user's attentive state. In particular, we show how systems that influence user workflow in concurrent task situations, such as those involved with management of multiparty communication, may benefit from such facilities.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124519045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduced saliency of peripheral targets in gaze-contingent multi-resolutional displays: blended versus sharp boundary windows
E. Reingold, Lester C. Loschky
Gaze-contingent multi-resolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays by dynamically placing high resolution in a window at the center of gaze, with lower resolution everywhere else. GCMRDs are also useful for investigating the perceptual processes involved in natural scene viewing. Several such studies suggest that potential saccade targets in degraded regions are less salient than those in the high-resolution window. Consistent with this, Reingold, Loschky, Stampe and Shen [2001b] found longer initial saccadic latencies to a salient peripheral target in conditions with a high-resolution window and degraded surround than in an all low-pass filtered, no-window condition. Nevertheless, these results may have been due to parafoveal load caused by the saliency of the boundary between the high- and low-resolution areas. The current study extends Reingold et al. [2001b] by comparing both sharp- and blended-resolution boundary conditions with an all low-resolution, no-window condition. The results replicate the previous findings [Reingold et al. 2001b] but indicate that the effect is unaltered by the type of window boundary (sharp or blended). This rules out the parafoveal load hypothesis, while further supporting the hypothesis that potential saccade targets in the degraded region are less salient than those in the high-resolution region.
{"title":"Reduced saliency of peripheral targets in gaze-contingent multi-resolutional displays: blended versus sharp boundary windows","authors":"E. Reingold, Lester C. Loschky","doi":"10.1145/507072.507091","DOIUrl":"https://doi.org/10.1145/507072.507091","url":null,"abstract":"Gaze-contingent multi-resolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays, by dynamically placing high-resolution in a window at the center of gaze, with lower resolution everywhere else. GCMRDs are also useful for investigating the perceptual processes involved in natural scene viewing. Several such studies suggest that potential saccade targets in degraded regions are less salient than those in the high-resolution window. Consistent with this, Reingold, Loschky, Stampe and Shen [2001b] found longer initial saccadic latencies to a salient peripheral target in conditions with a high-resolution window and degraded surround than in an all low-pass filtered no-window condition. Nevertheless, these results may have been due to parafoveal load caused by saliency of the boundary between the high- and low-resolution areas. The current study extends Reingold, et al. [2001b] by comparing both sharp- and blended-resolution boundary conditions with an all low-resolution no-window condition. The results replicate the previous findings [Reingold et al. 2001b] but indicate that the effect is unaltered by the type of window boundary (sharp or blended). This rules out the parafoveal load hypothesis, while further supporting the hypothesis that potential saccade targets in the degraded region are less salient than those in the high-resolution region.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124566990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FreeGaze: a gaze tracking system for everyday gaze interaction
Takehiko Ohno, N. Mukawa, A. Yoshikawa
In this paper we introduce a novel gaze tracking system called FreeGaze, designed for everyday gaze interaction. Among the various possible applications of gaze tracking, Human-Computer Interaction (HCI) is one of the most promising fields. However, existing systems require complicated and burdensome calibration and are not robust to measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, ours needs only two points for each individual's calibration. Once this personalization is finished, the system needs no further calibration before each measurement session. Evaluation tests show that the system is accurate and applicable to everyday use.
{"title":"FreeGaze: a gaze tracking system for everyday gaze interaction","authors":"Takehiko Ohno, N. Mukawa, A. Yoshikawa","doi":"10.1145/507072.507098","DOIUrl":"https://doi.org/10.1145/507072.507098","url":null,"abstract":"In this paper we introduce a novel gaze tracking system called FreeGaze, which is designed for the use of everyday gaze interaction. Among various possible applications of gaze tracking system, Human-Computer Interaction (HCI) is one of the most promising elds. However, existing systems require complicated and burden-some calibration and are not robust to the measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, our system needs only two points for each individual calibration. When the personalization nishes, our system needs no more calibration before each measurement session. Evaluation tests show that the system is accurate and applicable to everyday use for the applications.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128915639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}