
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

GazeCode
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204568
Jeroen S. Benjamins, R. Hessels, Ignace T. C. Hooge
Purpose: Eye movements recorded with mobile eye trackers generally have to be mapped to the visual stimulus manually, and manufacturer software usually has sub-optimal user interfaces for this task. Here, we compare our in-house developed open-source alternative, called GazeCode, to the manufacturer software. Method: 330 seconds of eye movements were recorded with the Tobii Pro Glasses 2. Eight coders subsequently categorized fixations using both Tobii Pro Lab and GazeCode. Results: Manual mapping was on average more than twice as fast with GazeCode (0.649 events/s) as with Tobii Pro Lab (0.292 events/s). Inter-rater reliability (Cohen's kappa) was similar and satisfactory: 0.886 for Tobii Pro Lab and 0.871 for GazeCode. Conclusion: GazeCode is a faster alternative to Tobii Pro Lab for mapping eye movements to the visual stimulus. Moreover, it accepts eye-tracking data from the manufacturers SMI, Positive Science, Tobii, and Pupil Labs.
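The two summary statistics reported here are straightforward to reproduce from coder output. A minimal sketch of both computations (the label data and variable names are hypothetical illustrations, not GazeCode's API):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labeling the same sequence of events."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() & cb.keys()) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders assign categories to six fixations.
coder1 = ["face", "face", "text", "hand", "text", "face"]
coder2 = ["face", "text", "text", "hand", "text", "face"]
print(cohens_kappa(coder1, coder2))

coding_time_s = 120.0   # hypothetical time a coder spent
events_coded = 78       # hypothetical number of mapped events
print(events_coded / coding_time_s, "events/s")  # the paper's mapping-speed measure
```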
{"title":"Gazecode","authors":"Jeroen S. Benjamins, R. Hessels, Ignace T. C. Hooge","doi":"10.1145/3204493.3204568","DOIUrl":"https://doi.org/10.1145/3204493.3204568","url":null,"abstract":"Purpose: Eye movements recorded with mobile eye trackers generally have to be mapped to the visual stimulus manually. Manufacturer software usually has sub-optimal user interfaces. Here, we compare our in-house developed open-source alternative to the manufacturer software, called GazeCode. Method: 330 seconds of eye movements were recorded with the Tobii Pro Glasses 2. Eight coders subsequently categorized fixations using both Tobii Pro Lab and GazeCode. Results: Average manual mapping speed was more than two times faster when using GazeCode (0.649 events/s) compared with Tobii Pro Lab (0.292 events/s). Inter-rater reliability (Cohen's Kappa) was similar and satisfactory; 0.886 for Tobii Pro Lab and 0.871 for GazeCode. Conclusion: GazeCode is a faster alternative to Tobii Pro Lab for mapping eye movements to the visual stimulus. Moreover, it accepts eye-tracking data from manufacturers SMI, Positive Science, Tobii, and Pupil Labs.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115896523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Supervised descent method (SDM) applied to accurate pupil detection in off-the-shelf eye tracking systems
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204551
Andoni Larumbe, R. Cabeza, A. Villanueva
The precise detection of the pupil/iris center is key to estimating gaze accurately. This task becomes especially challenging in low-cost frameworks, in which the algorithms employed in high-performance systems fail. In recent years, considerable effort has been devoted to applying training-based methods to low-resolution images. In this paper, the Supervised Descent Method (SDM) is applied to the GI4E database. The 2D landmarks employed for training are the corners of the eyes and the pupil centers. To validate the proposed algorithm, a cross-validation procedure is performed. The training strategy employed allows us to affirm that our method can potentially outperform state-of-the-art algorithms applied to the same dataset in terms of 2D accuracy. These promising results encourage further study of training-based methods for eye tracking.
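For readers unfamiliar with SDM: it learns a cascade of linear regressors mapping image features at the current landmark estimate to a landmark update. A minimal sketch of the generic method (the `features` function, e.g. descriptors sampled around the current estimate, is assumed to be provided; this is not the paper's implementation):

```python
import numpy as np

def train_sdm(X0, X_star, features, n_stages=4):
    """Learn a cascade of linear regressors (generic SDM).
    X0, X_star: (n_samples, n_coords) initial and ground-truth landmarks;
    features(x): descriptor vector extracted around landmark estimate x."""
    X, stages = X0.copy(), []
    for _ in range(n_stages):
        Phi = np.stack([features(x) for x in X])            # (n, d) features
        A = np.hstack([Phi, np.ones((len(Phi), 1))])        # append bias column
        W, *_ = np.linalg.lstsq(A, X_star - X, rcond=None)  # least-squares descent map
        R, b = W[:-1], W[-1]
        stages.append((R, b))
        X = X + Phi @ R + b                                 # apply the learned update
    return stages

def apply_sdm(stages, x0, features):
    """Refine an initial landmark estimate with the learned cascade."""
    x = x0.copy()
    for R, b in stages:
        x = x + features(x) @ R + b
    return x
```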
Citations: 8
An investigation of the effects of n-gram length in scanpath analysis for eye-tracking research
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204527
Manuele Reani, Niels Peek, C. Jay
Scanpath analysis is a controversial and important topic in eye-tracking research. Previous work has shown the value of scanpath analysis in perceptual tasks; little research has examined its utility for understanding human reasoning in complex tasks. Here, we analyze n-grams, which are contiguous ordered subsequences of participants' scanpaths. In particular, we study the n-gram lengths most appropriate for this form of analysis. We reuse datasets from previous studies of human cognition, medical diagnosis, and art, systematically analyzing the frequency of n-grams of increasing length, and compare this approach with a string-alignment-based method. The results show that subsequences of four or more areas of interest may not be of value for finding patterns that distinguish between two groups. This study is the first to systematically define the n-gram lengths suitable for analysis, using an approach that holds across diverse domains.
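Extracting the n-grams of an AOI-labeled scanpath is a short operation; a small illustration with a hypothetical scanpath:

```python
from collections import Counter

def scanpath_ngrams(scanpath, n):
    """All contiguous ordered subsequences (n-grams) of an AOI scanpath."""
    return [tuple(scanpath[i:i + n]) for i in range(len(scanpath) - n + 1)]

# Hypothetical scanpath: each letter is a visit to one area of interest.
path = list("ABACABDA")
for n in (2, 3, 4):
    print(n, Counter(scanpath_ngrams(path, n)).most_common(3))
```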
Citations: 10
Deepcomics: saliency estimation for comics
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204560
Kévin Bannier, Eakta Jain, O. Meur
A key requirement for training deep-learning saliency models is a large eye-tracking training dataset. Although eye-tracking technology has become far more accessible, collecting eye-tracking data at scale for very specific content types remains cumbersome. Comic images are one such type: unlike natural images such as photographs, they integrate text and pictorial content. In this paper, we show that a deep network trained on visual categories in which gaze deployment is similar to comics outperforms both existing models and models trained on visual categories in which gaze deployment differs dramatically from comics. Further, we find that a computationally generated dataset for a visual category close to comics is preferable to real eye-tracking data for a visual category with a different gaze deployment. These findings hold implications for transferring deep networks across domains.
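The paper's key notion is that of categories with "similar gaze deployment". As a hypothetical illustration of how such similarity could be quantified (this is not the paper's metric), one could compare normalized fixation density maps with a KL divergence:

```python
import numpy as np

def gaze_deployment_divergence(p, q, eps=1e-8):
    """KL divergence between two fixation density maps of the same shape.
    Lower values would indicate more similar gaze deployment."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```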
Citations: 9
Improving map reading with gaze-adaptive legends
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204544
F. Göbel, P. Kiefer, I. Giannopoulos, A. Duchowski, M. Raubal
Complex information visualizations, such as thematic maps, encode information using a particular symbology that often requires a legend to explain its meaning. Traditional legends are placed at the edge of a visualization, which makes them hard to keep in view while switching attention between content and legend. Moreover, an extensive search may be required to extract relevant information from the legend. In this paper we propose to use the user's visual attention to improve interaction with a map legend, adapting both the legend's placement and its content to the user's gaze. In a user study, we compared two novel adaptive legend behaviors to a traditional (non-adaptive) legend. We found that, with both of our approaches, participants spent significantly less task time looking at the legend than with the baseline approach. Furthermore, participants stated that they preferred the gaze-based adaptation of legend content (but not of its placement).
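As a rough illustration of what "adapting placement and content to gaze" could mean in code (a hypothetical sketch with assumed data structures and thresholds, not the authors' implementation):

```python
def adaptive_legend(fixation, symbols, radius=150, offset=40):
    """fixation: (x, y) in px; symbols: dicts with 'pos' (x, y) and 'label'.
    Returns a legend anchor near the gaze and only the locally relevant entries."""
    fx, fy = fixation
    near = [s for s in symbols
            if (s["pos"][0] - fx) ** 2 + (s["pos"][1] - fy) ** 2 <= radius ** 2]
    entries = sorted({s["label"] for s in near})   # content adaptation
    anchor = (fx + offset, fy + offset)            # placement adaptation
    return anchor, entries
```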
Citations: 31
AutoPager
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204556
Andrew D. Wilson, Shane Williams
A novel gaze-assisted reading technique uses the fact that in linear reading, the looking behavior of the reader is readily predicted. We introduce the AutoPager "page turning" technique, where the next bit of unread text is rendered in the periphery, ready to be read. This approach enables continuous gaze-assisted reading without requiring manual input to scroll: the reader merely saccades to the top of the page to read on. We demonstrate that when the new text is introduced with a gradual cross-fade effect, users are often unaware of the change: the user's impression is of reading the same page over and over again, yet the content changes. We present a user evaluation that compares AutoPager to previous gaze-assisted scrolling techniques. AutoPager may offer some advantages over previous gaze-assisted reading techniques, and is a rare example of exploiting "change blindness" in user interfaces.
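A hypothetical sketch of the trigger logic such a technique implies (the thresholds, fade duration, and state machine are assumptions for illustration, not the paper's implementation):

```python
class AutoPagerSketch:
    """Toy state machine: arm near the bottom of the page, cross-fade the next
    page in gradually (exploiting change blindness), and advance once the
    reader's gaze returns to the top."""
    def __init__(self, page_height, fade_frames=90):
        self.h, self.fade_frames = page_height, fade_frames
        self.fade, self.armed = 0, False

    def on_gaze_sample(self, y):
        if y > 0.85 * self.h:                      # reader is near the page bottom
            self.armed = True
        if self.armed and self.fade < self.fade_frames:
            self.fade += 1                         # advance the gradual cross-fade
        if self.armed and self.fade >= self.fade_frames and y < 0.15 * self.h:
            self.armed, self.fade = False, 0       # reader saccaded back to the top
            return "advance_page"
        return None
```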
{"title":"Autopager","authors":"Andrew D. Wilson, Shane Williams","doi":"10.1145/3204493.3204556","DOIUrl":"https://doi.org/10.1145/3204493.3204556","url":null,"abstract":"A novel gaze-assisted reading technique uses the fact that in linear reading, the looking behavior of the reader is readily predicted. We introduce the AutoPager \"page turning\" technique, where the next bit of unread text is rendered in the periphery, ready to be read. This approach enables continuous gaze-assisted reading without requiring manual input to scroll: the reader merely saccades to the top of the page to read on. We demonstrate that when the new text is introduced with a gradual cross-fade effect, users are often unaware of the change: the user's impression is of reading the same page over and over again, yet the content changes. We present a user evaluation that compares AutoPager to previous gaze-assisted scrolling techniques. AutoPager may offer some advantages over previous gaze-assisted reading techniques, and is a rare example of exploiting \"change blindness\" in user interfaces.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126634193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Useful approaches to exploratory analysis of gaze data: enhanced heatmaps, cluster maps, and transition maps
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204591
Poika Isokoski, J. Kangas, P. Majaranta
Exploratory analysis of gaze data requires methods that can process large amounts of data while minimizing human labor. The conventional approach to exploring gaze data is to construct heatmap visualizations. While simple and intuitive, conventional heatmaps do not clearly indicate differences between groups of viewers or give estimates of repeatability (i.e., which parts of the heatmap would look similar if the data were collected again). We discuss difference maps and significance maps that answer these needs. In addition, we describe methods based on automatic clustering that achieve similar results with cluster observation maps and transition maps. As demonstrated with our example data, these methods highlight the strongest differences between groups more effectively than conventional heatmaps.
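A difference map is essentially the pixelwise difference of two groups' normalized heatmaps. A minimal sketch (assuming NumPy/SciPy; the smoothing parameter is illustrative, not the paper's value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heatmap(fixations, shape, sigma=30):
    """Gaussian-smoothed fixation density map from (x, y) fixation points."""
    h = np.zeros(shape)
    for x, y in fixations:
        h[int(y), int(x)] += 1
    h = gaussian_filter(h, sigma)
    return h / max(h.max(), 1e-9)                  # normalize to [0, 1]

def difference_map(fixations_a, fixations_b, shape):
    """Positive where group A looked more, negative where group B looked more."""
    return heatmap(fixations_a, shape) - heatmap(fixations_b, shape)
```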
Citations: 8
A text entry interface using smooth pursuit movements and language model
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207413
Zhe Zeng, M. Rötting
With the development of eye-tracking technology, gaze-interaction applications are demonstrating great potential. Smooth-pursuit-based gaze typing is an intuitive text-entry system with low learning effort. In this study, we provide a language-prediction function for a smooth-pursuit-based gaze-typing system. Since state-of-the-art neural-network models have been applied successfully to language modeling, this study uses a pretrained model based on convolutional neural networks (CNNs) and develops a prediction function that predicts both the next possible letters and the next word. The results of a pilot experiment show that the next possible letters or word can be predicted and selected well. The mean typing speed reached 4.5 words per minute. Participants considered the word prediction helpful for reducing visual search time.
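The paper uses a pretrained CNN language model; to keep a sketch self-contained, a toy character bigram model stands in for it below and shows how next-letter candidates could be ranked for display as smooth-pursuit targets:

```python
from collections import Counter, defaultdict

class CharBigramModel:
    """Toy next-letter predictor; a stand-in for the paper's pretrained CNN."""
    def __init__(self, corpus):
        self.counts = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            self.counts[a][b] += 1

    def top_next_letters(self, prefix, k=3):
        """Rank candidates that could be offered as smooth-pursuit targets."""
        if not prefix or prefix[-1] not in self.counts:
            return []
        return [c for c, _ in self.counts[prefix[-1]].most_common(k)]

model = CharBigramModel("the quick brown fox jumps over the lazy dog " * 50)
print(model.top_next_letters("th"))   # ['e'] for this toy corpus
```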
Citations: 9
Modeling corneal reflection for eye-tracking considering eyelid occlusion
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208337
Michiya Yamamoto, Ryoma Matsuo, Satoshi Fukumori, Takashi Nagamatsu
Capturing Purkinje images is essential for wide-range and accurate eye-tracking. The range of eye rotation over which the Purkinje image is observable has so far been modeled by a cone shape called a gaze cone. In this study, we extended the gaze cone model to include occlusion by an eyelid. First, we developed a measurement device that has eight spider-like arms. Then, we proposed a novel model that considers eyeball movement. Using the device, we measured the range of corneal reflection, and we fitted the proposed model to the results.
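A crude sketch of what a gaze-cone visibility test with an eyelid cutoff could look like (the angles and coordinate convention are assumptions for illustration; the paper fits a measured model rather than fixed thresholds):

```python
import numpy as np

def reflection_visible(gaze_dir, camera_dir,
                       cone_half_angle_deg=40.0, eyelid_elevation_deg=25.0):
    """True when the camera lies inside the gaze cone AND the gaze is not
    raised past a fixed eyelid cutoff (y is taken as the 'up' axis)."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    c = camera_dir / np.linalg.norm(camera_dir)
    angle = np.degrees(np.arccos(np.clip(g @ c, -1.0, 1.0)))
    elevation = np.degrees(np.arcsin(np.clip(g[1], -1.0, 1.0)))
    return angle <= cone_half_angle_deg and elevation <= eyelid_elevation_deg
```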
Citations: 2
Circular orbits detection for gaze interaction using 2D correlation and profile matching algorithms
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204524
Eduardo Velloso, F. Coutinho, Andrew T. N. Kurauchi, C. Morimoto
Recently, interaction techniques in which the user selects screen targets by matching their movement with the input device have been gaining popularity, particularly in the context of gaze interaction (e.g., Pursuits, Orbits, and AmbiGaze). However, although many algorithms for enabling such interaction techniques have been proposed, we still lack an understanding of how they compare to each other. In this paper, we introduce two new algorithms for matching eye movements, Profile Matching and 2D Correlation, and present a systematic comparison with two state-of-the-art algorithms: the Basic Correlation algorithm used in Pursuits and the Rotated Correlation algorithm used in PathSync. We also examine the effects of two thresholding techniques and post-hoc filtering. We evaluated the algorithms on a user dataset and found 2D Correlation with one-level thresholding and post-hoc filtering to be the best-performing algorithm.
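In the spirit of these trajectory-matching methods, per-axis Pearson correlation with a single ("one-level") threshold looks roughly like the sketch below. This is a generic illustration of the family of algorithms compared here, not the paper's exact 2D Correlation implementation; the threshold value is an assumption:

```python
import numpy as np

def correlation_2d(gaze, target):
    """Per-axis Pearson correlation over a window of samples; both arrays
    are (n_samples, 2). The weaker axis limits the combined score."""
    rx = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
    ry = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
    return min(rx, ry)

def select_orbit_target(gaze_window, target_windows, threshold=0.8):
    """One-level thresholding: pick the best-correlated moving target, if any."""
    scores = [correlation_2d(gaze_window, t) for t in target_windows]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```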
Citations: 20