
Latest publications: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications

Tracing gaze-following behavior in virtual reality using Wiener-Granger causality
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208332
Marius Rubo, M. Gamer
We modelled gaze-following behavior in a naturalistic virtual reality environment using Wiener-Granger causality. Using this method, gaze following was statistically tangible throughout the experiment but could not easily be pinpointed to precise moments in time.
Citations: 0
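The statistical tool in the entry above, Wiener-Granger causality, asks whether the past of one time series improves prediction of another. A minimal sketch with statsmodels, using synthetic gaze-direction signals; the signal names and the 60 Hz sampling assumption are hypothetical, not the authors' data:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Synthetic 1-D gaze-direction traces: the "follower" lags the "leader"
# by 12 samples (~200 ms at an assumed 60 Hz sampling rate).
n, lag = 1000, 12
leader = np.cumsum(rng.normal(size=n))               # random-walk gaze direction
follower = np.roll(leader, lag) + rng.normal(scale=0.5, size=n)

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([follower, leader])
results = grangercausalitytests(data, maxlag=20, verbose=False)

# p-value of the F-test at the true lag; a small value means the leader's
# past significantly improves prediction of the follower's gaze.
print(results[lag][0]["ssr_ftest"][1])
```

As the abstract notes, such a test yields an aggregate statistical statement over the whole recording rather than time-stamped following events.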
Systematic shifts of fixation disparity accompanying brightness changes
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204587
A. Huckauf
Video-based gaze tracking is sensitive to brightness changes because of their effect on pupil size. Monocular observations indeed confirm that fixation locations vary with brightness. In close viewing, pupil size is coupled with accommodation and vergence, the so-called near triad. Hence, systematic changes in fixation disparity might be expected to co-occur with varying pupil size. In the current experiment, fixation disparity was assessed. Calibration was conducted on either a dark or a bright background, and text had to be read on both backgrounds, on a self-illuminating screen and on paper. When the calibration background matched the background during reading, mean fixation disparity did not differ from zero. In the non-calibrated conditions, however, a brighter stimulus went along with a dominance of crossed fixations, and vice versa. The data demonstrate that systematic changes in fixation disparity occur as an effect of brightness changes, advising careful setting of calibration parameters.
Citations: 1
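For readers unfamiliar with the measure: fixation disparity is the signed horizontal offset between the two eyes' gaze points on the fixated target. A minimal sketch of the per-condition computation, with hypothetical binocular samples and a crossed-positive sign convention (conventions vary between labs):

```python
import numpy as np

# Hypothetical horizontal gaze positions in degrees of visual angle,
# one entry per binocular sample during a fixation.
left_x = np.array([0.05, 0.12, 0.08, -0.02])
right_x = np.array([0.01, 0.03, 0.02, -0.09])

# Positive disparity = crossed fixation (eyes converge in front of the target),
# negative = uncrossed; zero would match the calibrated conditions reported above.
disparity = left_x - right_x
print(f"mean fixation disparity: {disparity.mean():+.3f} deg")
```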
Eye-tracking measures in audiovisual stimuli in infants at high genetic risk for ASD: challenging issues
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207423
Itziar Lozano, R. Campos, M. Belinchón
Individuals with autism spectrum disorder (ASD) have shown difficulties integrating auditory and visual sensory modalities. Here we aim to explore whether very young infants at genetic risk of ASD show atypicalities in this ability early in development. We recorded the visual attention of 4-month-old infants in a task using audiovisual natural stimuli (speaking faces). The complexity of this information and the attentional features of this population, among other factors, pose substantial challenges regarding the data quality obtained with an eye tracker. Here we discuss some of them and outline possible solutions.
Citations: 0
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications
Pub Date : 2018-06-14 DOI: 10.1145/3204493
Citations: 3
Towards using the spatio-temporal properties of eye movements to classify visual field defects
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204590
A. Grillini, Daniel Ombelet, R. S. Soans, F. Cornelissen
Perimetry, the assessment of visual field defects (VFD), requires patients to maintain prolonged, stable fixation and to provide feedback through a motor response. These requirements limit the testable population and often lead to inaccurate results. We hypothesized that different VFD would alter eye movements in systematic ways, making it possible to infer the presence of VFD by quantifying the spatio-temporal properties of eye movements. We developed a tracking test that records participants' eye movements while simulating different gaze-contingent VFD. We tested 50 visually healthy participants and simulated three common scotomas: peripheral loss, central loss, and hemifield loss. We quantified spatio-temporal features using cross-correlogram analysis, then applied cross-validation to train a decision tree algorithm to classify the conditions. Our test is faster and more comfortable than standard perimetry and achieves a classification accuracy of ~90% (true positive rate ~98%) with data acquired in less than 2 minutes.
Citations: 21
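The classification step described above, cross-validating a decision tree on spatio-temporal eye-movement features, maps directly onto scikit-learn. A minimal sketch with stand-in features; the cross-correlogram feature extraction is the paper's contribution and is only mimicked by random data here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in feature matrix: one row per tracking trial; columns could be, e.g.,
# cross-correlogram peak height, peak lag, and width of the stimulus-vs-gaze
# correlation (hypothetical choices, not the paper's exact feature set).
X = rng.normal(size=(200, 3))
y = rng.integers(0, 4, size=200)  # 0=healthy, 1=peripheral, 2=central, 3=hemifield

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # stratified 5-fold CV by default
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```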
Leveraging eye-gaze and time-series features to predict user interests and build a recommendation model for visual analysis
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204546
Nelson Silva, T. Schreck, Eduardo Veas, V. Sabol, E. Eggeling, D. Fellner
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze-based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features with eye-gaze interests captured via an eye tracker; mouse selections are also considered. The system provides an overlay visualization with recommended patterns and an eye-history graph that support users in the data exploration process. We conducted an experiment with 5 tasks in which 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features and discusses the precision/recall of our model in comparison to the final selections made by users. Our model helps users efficiently identify interesting time-series patterns.
Citations: 26
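The abstract does not spell out how gaze interest and time-series features are combined; as a hedged illustration of the general idea, one could blend a normalized dwell-time score with a shape-similarity score. Everything below (weights, features, function name) is invented for illustration, not taken from the paper:

```python
import numpy as np

def recommend(patterns, dwell_ms, query, w_gaze=0.4, w_shape=0.6, top_k=3):
    """Rank candidate time-series windows by blending gaze interest
    (normalized dwell time) with cosine similarity to a query pattern."""
    dwell = np.asarray(dwell_ms, dtype=float)
    interest = dwell / dwell.max()                       # 0..1 gaze interest
    sims = np.array([np.dot(p, query) /
                     (np.linalg.norm(p) * np.linalg.norm(query))
                     for p in patterns])                 # shape similarity
    score = w_gaze * interest + w_shape * sims
    return np.argsort(score)[::-1][:top_k]               # best-first indices

# Toy usage: three candidate sensor-data windows, one of which the user
# dwelt on longest and which also resembles the query shape.
rng = np.random.default_rng(1)
patterns = [rng.normal(size=50) for _ in range(3)]
query = patterns[1] + rng.normal(scale=0.1, size=50)
print(recommend(patterns, dwell_ms=[120, 900, 300], query=query))
```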
Robust marker tracking system for mapping mobile eye tracking data
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208339
Iyad Aldaqre, Roberto Delfiore
One of the challenges of mobile eye tracking is mapping gaze data onto a reference image of the stimulus. Here we present a marker-tracking system that relies on the scene video recorded by eye-tracking glasses to recognize and track markers and map gaze data onto the reference image. Due to the simple nature of the markers employed, the current system works with low-quality videos and at long distances from the stimulus, allowing the use of mobile eye tracking in new situations.
Citations: 0
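The paper does not specify its marker format, but the scene-video-to-reference mapping it describes can be sketched with OpenCV's ArUco markers and a homography. The marker layout, IDs, and reference coordinates below are hypothetical; the OpenCV calls are the standard 4.7+ API:

```python
import cv2
import numpy as np

# Hypothetical reference-image coordinates (px) of four marker centers.
REF_CENTERS = {0: (50, 50), 1: (950, 50), 2: (950, 650), 3: (50, 650)}

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict)  # OpenCV >= 4.7

def map_gaze_to_reference(scene_frame, gaze_xy):
    """Map one gaze point from scene-camera pixels to reference-image pixels."""
    corners, ids, _ = detector.detectMarkers(scene_frame)
    if ids is None or len(ids) < 4:
        return None                                   # too few markers visible
    centers = np.array([c.reshape(4, 2).mean(axis=0) for c in corners])
    targets = np.array([REF_CENTERS[int(i)] for i in ids.ravel()], dtype=np.float32)
    H, _ = cv2.findHomography(centers.astype(np.float32), targets, cv2.RANSAC)
    if H is None:
        return None                                   # degenerate configuration
    pt = np.array([[gaze_xy]], dtype=np.float32)      # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]
```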
A gaze gesture-based paradigm for situational impairments, accessibility, and rich interactions
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208344
Vijay Rajanna, T. Hammond
Gaze gesture-based interactions on a computer are promising, but existing systems are limited by the number of supported gestures, recognition accuracy, the need to remember stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework in which a user can design gestures and associate them with appropriate commands such as minimize, maximize, and scroll. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of screen size and resolution, and the user can draw a gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged in developing solutions for situational impairments and accessibility, and in implementing a rich interaction paradigm.
Citations: 7
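The recognizer's claimed independence from screen size and resolution suggests normalization before matching. A generic $1-recognizer-style sketch (resample by arc length, remove translation and scale, pick the nearest template); this is a standard technique, not the authors' algorithm:

```python
import numpy as np

def normalize(path, n=64):
    """Resample a gaze path to n points, then remove position and scale."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)]) / max(seg.sum(), 1e-9)
    even = np.linspace(0.0, 1.0, n)
    path = np.column_stack([np.interp(even, t, path[:, i]) for i in (0, 1)])
    path -= path.mean(axis=0)                      # translation invariance
    return path / max(np.abs(path).max(), 1e-9)    # scale invariance

def classify(path, templates):
    """Return the template name with the smallest mean point-wise distance."""
    p = normalize(path)
    return min(templates, key=lambda name: np.linalg.norm(
        p - normalize(templates[name]), axis=1).mean())

# Toy usage: an L-shaped gaze gesture drawn at a different size and offset
# still matches the "L" template after normalization.
templates = {"L": [(0, 0), (0, 100), (80, 100)], "line": [(0, 0), (100, 0)]}
print(classify([(400, 300), (400, 520), (570, 520)], templates))
```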
Gaze and head pointing for hands-free text entry: applicability to ultra-small virtual keyboards
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204539
Y. Gizatdinova, O. Špakov, O. Tuisku, M. Turk, Veikko Surakka
With the proliferation of small-screen computing devices, there has been a continuous trend toward reducing the size of interface elements. In virtual keyboards, this allows for more characters in a layout and additional function widgets. However, vision-based interfaces (VBIs) have only been investigated with large (e.g., full-screen) keyboards. To understand how key-size reduction affects the accuracy and speed of text-entry VBIs, we evaluated a gaze-controlled VBI (g-VBI) and a head-controlled VBI (h-VBI) with unconventionally small (0.4°, 0.6°, 0.8° and 1°) keys. Novices (N = 26) produced text significantly more accurately and quickly with the h-VBI than with the g-VBI, while the performance of experts (N = 12) was nearly equal for both VBIs when a 0.8°-1° key size was used. We discuss advantages and limitations of the VBIs for typing with ultra-small keyboards and emphasize relevant factors for designing such systems.
Citations: 10
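The key sizes above are given in degrees of visual angle. Converting such a size to on-screen pixels uses the standard relation size = 2 * d * tan(theta / 2); a sketch with a hypothetical viewing distance and pixel density, not values from the paper:

```python
import math

def deg_to_px(angle_deg, viewing_distance_mm=600.0, px_per_mm=3.78):
    """Pixels subtended by angle_deg at the given viewing distance.
    px_per_mm = 3.78 corresponds to a 96-dpi display (assumed setup)."""
    size_mm = 2.0 * viewing_distance_mm * math.tan(math.radians(angle_deg) / 2.0)
    return size_mm * px_per_mm

for deg in (0.4, 0.6, 0.8, 1.0):
    print(f"{deg:.1f} deg -> {deg_to_px(deg):.0f} px key size")
```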
PuReST
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204578
Thiago Santini, Wolfgang Fuhl, Enkelejda Kasneci
Pervasive eye-tracking applications such as gaze-based human-computer interaction and advanced driver assistance require real-time, accurate, and robust pupil detection. However, automated pupil detection has proved to be an intricate task in real-world scenarios due to a large mixture of challenges, for instance quickly changing illumination and occlusions. In this work, we introduce the Pupil Reconstructor with Subsequent Tracking (PuReST), a novel method for fast and robust pupil tracking. The proposed method was evaluated on over 266,000 realistic and challenging images acquired with three distinct head-mounted eye-tracking devices, increasing the pupil detection rate by 5.44 and 29.92 percentage points while reducing average run time by factors of 2.74 and 1.1, relative to state-of-the-art pupil detectors and vendor-provided pupil trackers, respectively. Overall, PuReST outperformed other methods in 81.82% of use cases.
{"title":"PuReST","authors":"Thiago Santini, Wolfgang Fuhl, Enkelejda Kasneci","doi":"10.1145/3204493.3204578","DOIUrl":"https://doi.org/10.1145/3204493.3204578","url":null,"abstract":"Pervasive eye-tracking applications such as gaze-based human computer interaction and advanced driver assistance require real-time, accurate, and robust pupil detection. However, automated pupil detection has proved to be an intricate task in real-world scenarios due to a large mixture of challenges - for instance, quickly changing illumination and occlusions. In this work, we introduce the Pupil Reconstructor with Subsequent Tracking (PuReST), a novel method for fast and robust pupil tracking. The proposed method was evaluated on over 266,000 realistic and challenging images acquired with three distinct head-mounted eye tracking devices, increasing pupil detection rate by 5.44 and 29.92 percentage points while reducing average run time by a factor of 2.74 and 1.1. w.r.t. state-of-the-art 1) pupil detectors and 2) vendor provided pupil trackers, respectively. Overall, PuReST outperformed other methods in 81.82% of use cases.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121845892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
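PuReST's edge-based pupil reconstruction is too involved to reproduce here, but the baseline task it improves on can be sketched as dark-blob detection with an ellipse fit. This is a generic baseline, explicitly not the PuReST algorithm, and its fixed threshold fails under exactly the illumination changes PuReST is designed to handle:

```python
import cv2
import numpy as np

def detect_pupil(eye_gray):
    """Crude pupil estimate on a grayscale IR eye image:
    darkest blob -> fitted ellipse ((cx, cy), (major, minor), angle)."""
    blur = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    # The pupil is usually the darkest region in IR eye images; a fixed
    # threshold is the weak point that robust detectors avoid.
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:              # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)
```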