Robust clustering of eye movement recordings for quantification of visual interest
A. Santella, D. DeCarlo. doi:10.1145/968363.968368
Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
{"title":"Robust clustering of eye movement recordings for quantification of visual interest","authors":"A. Santella, D. DeCarlo","doi":"10.1145/968363.968368","DOIUrl":"https://doi.org/10.1145/968363.968368","url":null,"abstract":"Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126988374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A free-head, simple calibration, gaze tracking system that enables gaze-based interaction
Takehiko Ohno, N. Mukawa. doi:10.1145/968363.968387
Human eye gaze is a strong candidate for creating new application areas in human-computer interaction. To implement a truly practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers free head movement and simple personal calibration. It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time: the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degree of visual angle.
{"title":"A free-head, simple calibration, gaze tracking system that enables gaze-based interaction","authors":"Takehiko Ohno, N. Mukawa","doi":"10.1145/968363.968387","DOIUrl":"https://doi.org/10.1145/968363.968387","url":null,"abstract":"Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. It does not require the user wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131425634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze typing compared with input by head and hand
J. P. Hansen, K. Tørning, A. Johansen, K. Itoh, Hirotaka Aoki. doi:10.1145/968363.968389
This paper investigates the usability of gaze-typing systems for disabled people from a broad perspective that takes into account the usage scenarios and the particular users these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support for multimodal input. A detailed investigation of efficiency and user satisfaction with a Danish and a Japanese gaze-typing system compares them with head typing and mouse (hand) typing. We found gaze typing to be more error-prone than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand) typing. Possibilities for design improvements are discussed.
{"title":"Gaze typing compared with input by head and hand","authors":"J. P. Hansen, K. Tørning, A. Johansen, K. Itoh, Hirotaka Aoki","doi":"10.1145/968363.968389","DOIUrl":"https://doi.org/10.1145/968363.968389","url":null,"abstract":"This paper investigates the usability of gaze-typing systems for disabled people in a broad perspective that takes into account the usage scenarios and the particular users that these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support of multimodal input. A detailed investigation of the efficiency and user satisfaction with a Danish and a Japanese gaze-typing system compares it to head- and mouse (hand) - typing. We found gaze typing to be more erroneous than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand-) typing. Possibilities for design improvements are discussed.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130452125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building a lightweight eyetracking headgear
J. Babcock, J. Pelz. doi:10.1145/968363.968386
Eyetracking systems that use video-based cameras to monitor the eye and scene can be made significantly smaller thanks to tiny micro-lens video cameras. Pupil detection algorithms are generally implemented in hardware, allowing for real-time eyetracking. However, it is likely that real-time eyetracking will soon be fully accomplished in software alone. This paper encourages an "open-source" approach to eyetracking by providing practical tips on building a lightweight eyetracker from commercially available micro-lens cameras and other parts. While the headgear described here can be used with any dark-pupil eyetracking controller, it also opens the door to open-source software solutions that could be developed by the eyetracking and image-processing communities. Such systems could be optimized without concern for real-time performance because the systems could be run offline.
{"title":"Building a lightweight eyetracking headgear","authors":"J. Babcock, J. Pelz","doi":"10.1145/968363.968386","DOIUrl":"https://doi.org/10.1145/968363.968386","url":null,"abstract":"Eyetracking systems that use video-based cameras to monitor the eye and scene can be made significantly smaller thanks to tiny micro-lens video cameras. Pupil detection algorithms are generally implemented in hardware, allowing for real-time eyetracking. However, it is likely that real-time eyetracking will soon be fully accomplished in software alone. This paper encourages an \"open-source\" approach to eyetracking by providing practical tips on building a lightweight eyetracker from commercially available micro-lens cameras and other parts. While the headgear described here can be used with any dark-pupil eyetracking controller, it also opens the door to open-source software solutions that could be developed by the eyetracking and image-processing communities. Such systems could be optimized without concern for real-time performance because the systems could be run offline.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"3 Suppl 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125691184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment
Benjamin Law, S. Fraser, Stella Atkins, A. Kirkpatrick, A. Lomax, C. MacKenzie. doi:10.1145/968363.968370
Visual information is important to surgeons' manipulative performance, especially in laparoscopic surgery, where tactile feedback is reduced compared with open surgery. Studying surgeons' eye movements is an innovative way of assessing skill, in that a comparison of eye movement strategies between expert surgeons and novices may reveal important differences that could be used in training. We conducted a preliminary study comparing the eye movements of 5 experts and 5 novices performing a one-handed aiming task on a computer-based laparoscopic surgery simulator. The performance results showed that experts were quicker and generally committed fewer errors than novices. We investigated eye movements as a possible explanation for the experts' better performance. The gaze analysis showed that novices needed more visual feedback of the tool position to complete the task than experts did. In addition, experts tended to maintain their gaze on the target while manipulating the tool, whereas novices were more varied in their behaviour; for example, on some trials novices tracked the movement of the tool until it reached the target.
{"title":"Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment","authors":"Benjamin Law, S. Fraser, Stella Atkins, A. Kirkpatrick, A. Lomax, C. MacKenzie","doi":"10.1145/968363.968370","DOIUrl":"https://doi.org/10.1145/968363.968370","url":null,"abstract":"Visual information is important in surgeons' manipulative performance especially in laparoscopic surgery where tactual feedback is less than in open surgery. The study of surgeons' eye movements is an innovative way of assessing skill, in that a comparison of the eye movement strategies between expert surgeons and novices may show important differences that could be used in training. We conducted a preliminary study comparing the eye movements of 5 experts and 5 novices performing a one-handed aiming task on a computer-based laparoscopic surgery simulator. The performance results showed that experts were quicker and generally committed fewer errors than novices. We investigated eye movements as a possible factor for experts performing better than novices. The results from eye gaze analysis showed that novices needed more visual feedback of the tool position to complete the task than did experts. In addition, the experts tended to maintain eye gaze on the target while manipulating the tool, whereas novices were more varied in their behaviours. For example, we found that on some trials, novices tracked the movement of the tool until it reached the target.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116849001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual deictic reference in a collaborative virtual environment
A. Duchowski, Nathan Cournia, Brian Cumming, Daniel McCallum, A. Gramopadhye, J. Greenstein, Sajay Sadasivan, R. Tyrrell. doi:10.1145/968363.968369
This paper evaluates the use of visual deictic reference (VDR) in collaborative virtual environments (CVEs). A simple CVE capable of hosting two (or more) participants simultaneously immersed in the same virtual environment is used as the testbed. One participant's VDR, obtained by tracking the participant's gaze, is projected into co-participants' environments in real time as a colored lightspot. We compare the eye-slaved VDR lightspot with a head-slaved one and show that an eye-slaved VDR helps disambiguate the deictic point of reference, especially when the user's line of sight is decoupled from their head direction.
{"title":"Visual deictic reference in a collaborative virtual environment","authors":"A. Duchowski, Nathan Cournia, Brian Cumming, Daniel McCallum, A. Gramopadhye, J. Greenstein, Sajay Sadasivan, R. Tyrrell","doi":"10.1145/968363.968369","DOIUrl":"https://doi.org/10.1145/968363.968369","url":null,"abstract":"This paper evaluates the use of Visual Deictic Reference (VDR) in Collaborative Virtual Environments (CVEs). A simple CVE capable of hosting two (or more) participants simultaneously immersed in the same virtual environment is used as the testbed. One participant's VDR, obtained by tracking the participant's gaze, is projected to co-participants' environments in real-time as a colored lightspot. We compare the VDR lightspot when it is eye-slaved to when it is head-slaved and show that an eye-slaved VDR helps disambiguate the deictic point of reference, especially during conditions when the user's line of sight is decoupled from their head direction.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127252926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Head movement estimation for wearable eye tracker
C. Rothkopf, J. Pelz. doi:10.1145/968363.968388
In the study of eye movements in natural tasks, where subjects are able to move freely in their environment, it is desirable to capture video of the subject's surroundings that is not limited to the small field of view of an eye tracker's scene camera. Moreover, recovering the head movements could give additional information about the type of eye movement that was carried out, the overall gaze change in world coordinates, and insight into higher-order perceptual strategies. Algorithms for classifying eye movements in such natural tasks could also benefit from the additional head movement data. We propose to use an omnidirectional vision sensor consisting of a small CCD video camera and a hyperbolic mirror. The camera is mounted on an ASL eye tracker and records an image sequence at 60 Hz. Several algorithms for extracting rotational motion from this image sequence were implemented and their performance compared against the measurements of a Fasttrack magnetic tracking system. Using data from the eye tracker together with data obtained by the omnidirectional image sensor, a new algorithm for classifying different types of eye movements, based on a hidden Markov model, was developed.
{"title":"Head movement estimation for wearable eye tracker","authors":"C. Rothkopf, J. Pelz","doi":"10.1145/968363.968388","DOIUrl":"https://doi.org/10.1145/968363.968388","url":null,"abstract":"In the study of eye movements in natural tasks, where subjects are able to freely move in their environment, it is desirable to capture a video of the surroundings of the subject not limited to a small field of view as obtained by the scene camera of an eye tracker. Moreover, recovering the head movements could give additional information about the type of eye movement that was carried out, the overall gaze change in world coordinates, and insight into high-order perceptual strategies. Algorithms for the classification of eye movements in such natural tasks could also benefit form the additional head movement data.We propose to use an omnidirectional vision sensor consisting of a small CCD video camera and a hyperbolic mirror. The camera is mounted on an ASL eye tracker and records an image sequence at 60 Hz. Several algorithms for the extraction of rotational motion from this image sequence were implemented and compared in their performance against the measurements of a Fasttrack magnetic tracking system. Using data from the eye tracker together with the data obtained by the omnidirectional image sensor, a new algorithm for the classification of different types of eye movements based on a Hidden-Markov-Model was developed.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133754634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECSGlasses and EyePliances: using attention to open sociable windows of interaction
Jeffrey S. Shell, Roel Vertegaal, D. Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, C. Dickie. doi:10.1145/968363.968384
We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances, and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users, which has the potential to reduce inappropriate intrusions and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first-person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.
{"title":"ECSGlasses and EyePliances: using attention to open sociable windows of interaction","authors":"Jeffrey S. Shell, Roel Vertegaal, D. Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, C. Dickie","doi":"10.1145/968363.968384","DOIUrl":"https://doi.org/10.1145/968363.968384","url":null,"abstract":"We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions, and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133217796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A gaze contingent environment for fostering social attention in autistic children
R. Ramloll, C. Trepagnier, M. Sebrechts, A. Finkelmeyer. doi:10.1145/968363.968367
This paper documents the engineering of a gaze-contingent therapeutic environment for exploring and validating a proposed rehabilitative technique addressing attention deficits in 24- to 54-month-old autistic subjects. It discusses the current state of progress and lessons learned so far, while highlighting the project's outstanding engineering challenges. We focus on calibration issues for this target group of users, explain the architecture of the system, and present our general workflow for constructing the gaze-contingent environment. While this work is being undertaken for therapeutic purposes, it is likely to be relevant to the construction of gaze-contingent displays for entertainment.
{"title":"A gaze contingent environment for fostering social attention in autistic children","authors":"R. Ramloll, C. Trepagnier, M. Sebrechts, A. Finkelmeyer","doi":"10.1145/968363.968367","DOIUrl":"https://doi.org/10.1145/968363.968367","url":null,"abstract":"This paper documents the engineering of a gaze contingent therapeutic environment for the exploration and validation of a proposed rehabilitative technique addressing attention deficits in 24 to 54 months old autistic subjects. It discusses the current state of progress and lessons learnt so far while highlighting the outstanding engineering challenges of this project. We focus on calibration issues for this target group of users, explain the architecture of the system and present our general workflow for the construction of the gaze contingent environment. While this work is being undertaken for therapeutic purposes, it is likely to be relevant to the construction of gaze contingent displays for entertainment.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115306599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}