Title: Tracing gaze-following behavior in virtual reality using Wiener-Granger causality
Authors: Marius Rubo, M. Gamer
DOI: 10.1145/3204493.3208332
Abstract: We modelled gaze-following behavior in a naturalistic virtual reality environment using Wiener-Granger causality. With this method, gaze following was statistically detectable throughout the experiment, but could not easily be pinpointed to precise moments in time.

Title: Systematic shifts of fixation disparity accompanying brightness changes
Authors: A. Huckauf
DOI: 10.1145/3204493.3204587
Abstract: Video-based gaze tracking is sensitive to brightness changes because of their effect on pupil size. Monocular observations indeed confirm that fixation locations vary with brightness. In close viewing, pupil size is coupled with accommodation and vergence, the so-called near triad. Hence, systematic changes in fixation disparity might be expected to co-occur with varying pupil size. In the current experiment, fixation disparity was assessed. Calibration was conducted on either a dark or a bright background, and text had to be read on both backgrounds, on a self-illuminating screen and on paper. When the calibration background matched the background during reading, mean fixation disparity did not differ from zero. In the non-calibrated conditions, however, a brighter stimulus went along with a dominance of crossed fixations, and vice versa. The data demonstrate that systematic changes in fixation disparity occur as an effect of brightness changes, advising careful setting of calibration parameters.

Title: Eye-tracking measures in audiovisual stimuli in infants at high genetic risk for ASD: challenging issues
Authors: Itziar Lozano, R. Campos, M. Belinchón
DOI: 10.1145/3204493.3207423
Abstract: Individuals with autism spectrum disorder (ASD) have shown difficulties integrating auditory and visual sensory modalities. Here we aim to explore whether very young infants at genetic risk of ASD show atypicalities in this ability early in development. We recorded the visual attention of 4-month-old infants in a task using natural audiovisual stimuli (speaking faces). The complexity of this information and the attentional characteristics of this population, among other factors, pose a great number of challenges regarding the quality of data obtained with an eye tracker. Here we discuss some of them and outline possible solutions.

{"title":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","authors":"","doi":"10.1145/3204493","DOIUrl":"https://doi.org/10.1145/3204493","url":null,"abstract":"","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133436690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Towards using the spatio-temporal properties of eye movements to classify visual field defects
Authors: A. Grillini, Daniel Ombelet, R. S. Soans, F. Cornelissen
DOI: 10.1145/3204493.3204590
Abstract: Perimetry, the assessment of visual field defects (VFD), requires patients to maintain prolonged stable fixation and to provide feedback through a motor response. These requirements limit the testable population and often lead to inaccurate results. We hypothesized that different VFD alter eye movements in systematic ways, making it possible to infer the presence of VFD by quantifying the spatio-temporal properties of eye movements. We developed a tracking test to record participants' eye movements while we simulated different gaze-contingent VFD. We tested 50 visually healthy participants and simulated three common scotomas: peripheral loss, central loss and hemifield loss. We quantified spatio-temporal features using cross-correlogram analysis, then applied cross-validation to train a decision tree algorithm to classify the conditions. Our test is faster and more comfortable than standard perimetry and can achieve a classification accuracy of ∼90% (true positive rate ∼98%) with data acquired in less than 2 minutes.

Title: Leveraging eye-gaze and time-series features to predict user interests and build a recommendation model for visual analysis
Authors: Nelson Silva, T. Schreck, Eduardo Veas, V. Sabol, E. Eggeling, D. Fellner
DOI: 10.1145/3204493.3204546
Abstract: We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze-based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye tracker; mouse selections are also considered. The system provides an overlay visualization with recommended patterns and an eye-history graph that supports users in the data exploration process. We conducted an experiment with 5 tasks in which 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features and discusses the precision and recall of our model in comparison to the final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.

Title: Robust marker tracking system for mapping mobile eye tracking data
Authors: Iyad Aldaqre, Roberto Delfiore
DOI: 10.1145/3204493.3208339
Abstract: One of the challenges of mobile eye tracking is mapping gaze data onto a reference image of the stimulus. Here we present a marker-tracking system that relies on the scene video recorded by eye-tracking glasses to recognize and track markers and to map gaze data onto the reference image. Due to the simple nature of the markers employed, the current system works with low-quality videos and at long distances from the stimulus, allowing the use of mobile eye tracking in new situations.

Title: A gaze gesture-based paradigm for situational impairments, accessibility, and rich interactions
Authors: Vijay Rajanna, T. Hammond
DOI: 10.1145/3204493.3208344
Abstract: Gaze gesture-based interactions on a computer are promising, but existing systems are limited by the number of supported gestures, recognition accuracy, the need to remember the stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework in which a user can design gestures and associate them with appropriate commands such as minimize, maximize, and scroll. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of screen size and resolution, and the user can draw a gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged in developing solutions for situational impairments and accessibility, and for implementing a rich interaction paradigm.

Title: Gaze and head pointing for hands-free text entry: applicability to ultra-small virtual keyboards
Authors: Y. Gizatdinova, O. Špakov, O. Tuisku, M. Turk, Veikko Surakka
DOI: 10.1145/3204493.3204539
Abstract: With the proliferation of small-screen computing devices, there has been a continuous trend toward reducing the size of interface elements. In virtual keyboards, this allows for more characters in a layout and additional function widgets. However, vision-based interfaces (VBIs) have only been investigated with large (e.g., full-screen) keyboards. To understand how key size reduction affects the accuracy and speed of text entry with VBIs, we evaluated a gaze-controlled VBI (g-VBI) and a head-controlled VBI (h-VBI) with unconventionally small (0.4°, 0.6°, 0.8° and 1°) keys. Novices (N = 26) produced text significantly more accurately and quickly with the h-VBI than with the g-VBI, while the performance of experts (N = 12) with both VBIs was nearly equal when 0.8°-1° keys were used. We discuss advantages and limitations of the VBIs for typing with ultra-small keyboards and emphasize factors relevant to designing such systems.

Title: PuReST
Authors: Thiago Santini, Wolfgang Fuhl, Enkelejda Kasneci
DOI: 10.1145/3204493.3204578
Abstract: Pervasive eye-tracking applications such as gaze-based human-computer interaction and advanced driver assistance require real-time, accurate, and robust pupil detection. However, automated pupil detection has proved to be an intricate task in real-world scenarios due to a large mixture of challenges, for instance quickly changing illumination and occlusions. In this work, we introduce the Pupil Reconstructor with Subsequent Tracking (PuReST), a novel method for fast and robust pupil tracking. The proposed method was evaluated on over 266,000 realistic and challenging images acquired with three distinct head-mounted eye-tracking devices. It increased the pupil detection rate by 5.44 and 29.92 percentage points while reducing average run time by factors of 2.74 and 1.1, respectively, with respect to (1) state-of-the-art pupil detectors and (2) vendor-provided pupil trackers. Overall, PuReST outperformed the other methods in 81.82% of use cases.
