GazeCode
Jeroen S. Benjamins, R. Hessels, Ignace T. C. Hooge
DOI: 10.1145/3204493.3204568
Purpose: Eye movements recorded with mobile eye trackers generally have to be mapped to the visual stimulus manually, and manufacturer software usually has sub-optimal user interfaces for this task. Here, we compare GazeCode, our in-house developed open-source alternative, to the manufacturer software. Method: 330 seconds of eye movements were recorded with the Tobii Pro Glasses 2. Eight coders subsequently categorized fixations using both Tobii Pro Lab and GazeCode. Results: Average manual mapping speed was more than twice as fast with GazeCode (0.649 events/s) as with Tobii Pro Lab (0.292 events/s). Inter-rater reliability (Cohen's kappa) was similar and satisfactory: 0.886 for Tobii Pro Lab and 0.871 for GazeCode. Conclusion: GazeCode is a faster alternative to Tobii Pro Lab for mapping eye movements to the visual stimulus. Moreover, it accepts eye-tracking data from the manufacturers SMI, Positive Science, Tobii, and Pupil Labs.
Supervised descent method (SDM) applied to accurate pupil detection in off-the-shelf eye tracking systems
Andoni Larumbe, R. Cabeza, A. Villanueva
DOI: 10.1145/3204493.3204551
Precise detection of the pupil/iris center is key to accurate gaze estimation. This becomes especially challenging in low-cost frameworks, where the algorithms employed in high-performance systems fail. In recent years, considerable effort has been made to apply training-based methods to low-resolution images. In this paper, the Supervised Descent Method (SDM) is applied to the GI4E database. The 2D landmarks employed for training are the eye corners and the pupil centers. To validate the proposed algorithm, a cross-validation procedure is performed. The training strategy employed suggests that our method can potentially outperform state-of-the-art algorithms applied to the same dataset in terms of 2D accuracy. These promising results encourage further study of training-based methods for eye tracking.
An investigation of the effects of n-gram length in scanpath analysis for eye-tracking research
Manuele Reani, Niels Peek, C. Jay
DOI: 10.1145/3204493.3204527
Scanpath analysis is a controversial and important topic in eye-tracking research. Previous work has shown the value of scanpath analysis in perceptual tasks; little research has examined its utility for understanding human reasoning in complex tasks. Here, we analyze n-grams, which are continuous ordered subsequences of participants' scanpaths. In particular, we study the n-gram lengths that are most appropriate for this form of analysis. We reuse datasets from previous studies of human cognition, medical diagnosis, and art, systematically analyzing the frequency of n-grams of increasing length, and compare this approach with a string alignment-based method. The results show that subsequences of four or more areas of interest may not be of value for finding patterns that distinguish between two groups. The study is the first to systematically define the n-gram lengths suitable for this analysis, using an approach that holds across diverse domains.
DeepComics: saliency estimation for comics
Kévin Bannier, Eakta Jain, O. Meur
DOI: 10.1145/3204493.3204560
A key requirement for training deep-learning saliency models is a large eye-tracking training dataset. Although eye-tracking technology has become much more accessible, collecting eye-tracking data at scale for very specific content types, such as comic images, remains cumbersome; comics differ from natural images such as photographs because text and pictorial content are integrated. In this paper, we show that a deep network trained on visual categories in which gaze deployment is similar to comics outperforms both existing models and models trained on visual categories with dramatically different gaze deployment. Further, we find that it is better to use a computationally generated dataset for a visual category close to comics than real eye-tracking data from a visual category with a different gaze deployment. These findings hold implications for transferring deep networks to different domains.
Improving map reading with gaze-adaptive legends
F. Göbel, P. Kiefer, I. Giannopoulos, A. Duchowski, M. Raubal
DOI: 10.1145/3204493.3204544
Complex information visualizations, such as thematic maps, encode information using a particular symbology that often requires a legend to explain its meaning. Traditional legends are placed at the edge of a visualization, which makes them hard to keep track of while switching attention between content and legend; moreover, an extensive search may be required to extract relevant information from the legend. In this paper we propose to use the user's visual attention to improve interaction with a map legend by adapting both the legend's placement and its content to the user's gaze. In a user study, we compared two novel adaptive legend behaviors to a traditional (non-adaptive) legend. We found that, with both of our approaches, participants spent significantly less task time looking at the legend than with the baseline. Furthermore, participants stated that they preferred the gaze-based adaptation of the legend content (but not of its placement).
AutoPager
Andrew D. Wilson, Shane Williams
DOI: 10.1145/3204493.3204556
A novel gaze-assisted reading technique exploits the fact that in linear reading, the reader's looking behavior is readily predicted. We introduce the AutoPager "page turning" technique, in which the next portion of unread text is rendered in the periphery, ready to be read. This approach enables continuous gaze-assisted reading without manual input to scroll: the reader merely saccades to the top of the page to read on. We demonstrate that when the new text is introduced with a gradual cross-fade effect, users are often unaware of the change: the user's impression is of reading the same page over and over again, yet the content changes. We present a user evaluation that compares AutoPager to previous gaze-assisted scrolling techniques. AutoPager may offer some advantages over previous gaze-assisted reading techniques, and it is a rare example of exploiting "change blindness" in user interfaces.
Useful approaches to exploratory analysis of gaze data: enhanced heatmaps, cluster maps, and transition maps
Poika Isokoski, J. Kangas, P. Majaranta
DOI: 10.1145/3204493.3204591
Exploratory analysis of gaze data requires methods that make it possible to process large amounts of data while minimizing human labor. The conventional approach to exploring gaze data is to construct heatmap visualizations. While simple and intuitive, conventional heatmaps do not clearly indicate differences between groups of viewers, nor do they give estimates of repeatability (i.e., which parts of the heatmap would look similar if the data were collected again). We discuss difference maps and significance maps that answer these needs. In addition, we describe methods based on automatic clustering that achieve similar results with cluster observation maps and transition maps. As demonstrated with our example data, these methods highlight the strongest differences between groups more effectively than conventional heatmaps.
A text entry interface using smooth pursuit movements and language model
Zhe Zeng, M. Rötting
DOI: 10.1145/3204493.3207413
With the development of eye-tracking technology, gaze-interaction applications have shown great potential. Smooth-pursuit-based gaze typing is an intuitive text entry method with low learning effort. In this study, we add a language-prediction function to a smooth-pursuit-based gaze-typing system. Since state-of-the-art neural network models have been applied successfully to language modeling, this study uses a pretrained model based on convolutional neural networks (CNNs) to build a prediction function that suggests both the next possible letters and the next word. The results of a pilot experiment show that the next possible letters and words can be predicted and selected well, with a mean typing speed of 4.5 words per minute. Participants considered the word prediction helpful for reducing visual search time.
Modeling corneal reflection for eye-tracking considering eyelid occlusion
Michiya Yamamoto, Ryoma Matsuo, Satoshi Fukumori, Takashi Nagamatsu
DOI: 10.1145/3204493.3208337
Capturing Purkinje images is essential for wide-range and accurate eye tracking. The range of eye rotation over which the Purkinje image is observable has so far been modeled by a cone shape called a gaze cone. In this study, we extend the gaze cone model to include occlusion by the eyelid. First, we developed a measurement device with eight spider-like arms. We then proposed a novel model that takes eyeball movement into account. Using the device, we measured the range of corneal reflection and fitted the proposed model to the results.
Circular orbits detection for gaze interaction using 2D correlation and profile matching algorithms
Eduardo Velloso, F. Coutinho, Andrew T. N. Kurauchi, C. Morimoto
DOI: 10.1145/3204493.3204524
Interaction techniques in which the user selects on-screen targets by matching their movement with the input device have recently been gaining popularity, particularly in the context of gaze interaction (e.g., Pursuits, Orbits, AmbiGaze). However, although many algorithms for enabling such interaction techniques have been proposed, we still lack an understanding of how they compare to each other. In this paper, we introduce two new algorithms for matching eye movements, Profile Matching and 2D Correlation, and present a systematic comparison with two other state-of-the-art algorithms: the Basic Correlation algorithm used in Pursuits and the Rotated Correlation algorithm used in PathSync. We also examine the effects of two thresholding techniques and post-hoc filtering. Evaluated on a user dataset, 2D Correlation with one-level thresholding and post-hoc filtering was the best-performing algorithm.