
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

SLAM-based localization of 3D gaze using a mobile eye tracker
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3204584
Haofei Wang, Jimin Pi, Tong Qin, S. Shen, Bertram E. Shi
Past work in eye tracking has focused on estimating gaze targets in two dimensions (2D), e.g. on a computer screen or scene camera image. Three-dimensional (3D) gaze estimates would be extremely useful when humans are mobile and interacting with the real 3D environment. We describe a system for estimating the 3D locations of gaze using a mobile eye tracker. The system integrates estimates of the user's gaze vector from a mobile eye tracker, estimates of the eye tracker pose from a visual-inertial simultaneous localization and mapping (SLAM) algorithm, and a 3D point cloud map of the environment from an RGB-D sensor. Experimental results indicate that our system produces accurate estimates of 3D gaze over a much larger range than remote eye trackers. Our system will enable applications such as the analysis of 3D human attention and more anticipative human-robot interfaces.
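The core geometry behind such a system can be illustrated with a minimal sketch (function names and the nearest-point-to-ray heuristic are assumptions for illustration, not the authors' implementation): rotate the gaze vector from the tracker frame into the world frame using the SLAM pose, then take the point-cloud point closest to the resulting ray as the 3D gaze estimate.

```python
# Sketch: intersect a world-frame gaze ray with a 3D point cloud.
def mat_vec(R, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def estimate_3d_gaze(gaze_dir, R, t, point_cloud):
    """gaze_dir: unit gaze vector in the tracker frame; (R, t): tracker
    pose in the world frame from SLAM; point_cloud: list of 3D points."""
    origin = t                    # ray origin = tracker position
    d = mat_vec(R, gaze_dir)      # ray direction in the world frame
    best, best_dist = None, float("inf")
    for p in point_cloud:
        v = [p[i] - origin[i] for i in range(3)]
        s = sum(v[i] * d[i] for i in range(3))  # projection onto the ray
        if s <= 0:                              # point is behind the user
            continue
        # squared perpendicular distance from the point to the ray
        dist2 = sum((v[i] - s * d[i]) ** 2 for i in range(3))
        if dist2 < best_dist:
            best, best_dist = p, dist2
    return best
```

For example, with an identity pose and a gaze straight along z, a point on the optical axis wins over an off-axis one.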
Citations: 38
Leveraging eye-gaze and time-series features to predict user interests and build a recommendation model for visual analysis
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3204546
Nelson Silva, T. Schreck, Eduardo Veas, V. Sabol, E. Eggeling, D. Fellner
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze-based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features with eye-gaze interests captured via an eye tracker; mouse selections are also considered. The system provides an overlay visualization with recommended patterns and an eye-history graph that support users in the data exploration process. We conducted an experiment with five tasks in which 30 participants explored sensor data from a wind turbine. This work presents results on pre-attentive features and discusses the precision/recall of our model in comparison to the final selections made by users. Our model helps users efficiently identify interesting time-series patterns.
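One plausible way to combine the two signals (a sketch under stated assumptions: the scoring function, `alpha` weight, and all names are hypothetical, not the authors' model) is to rank candidate patterns by a weighted mix of feature-space similarity to user-selected patterns and normalized gaze dwell time:

```python
# Sketch: recommend time-series patterns from features + gaze interest.
def recommend(candidates, selected_features, dwell_time, k=3, alpha=0.7):
    """candidates: {pattern_id: feature_vector}; selected_features: mean
    feature vector of patterns the user selected; dwell_time: {pattern_id:
    seconds gazed}; alpha: weight of feature similarity vs. gaze interest."""
    max_dwell = max(dwell_time.values()) or 1.0
    def similarity(f):
        # Euclidean distance squashed into (0, 1]
        d = sum((a - b) ** 2 for a, b in zip(f, selected_features)) ** 0.5
        return 1.0 / (1.0 + d)
    scores = {
        pid: alpha * similarity(f)
             + (1 - alpha) * dwell_time.get(pid, 0.0) / max_dwell
        for pid, f in candidates.items()
    }
    # highest combined score first
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A pattern close to the user's selections in feature space and frequently gazed at ranks first; a distant, barely-viewed one is excluded.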
Citations: 26
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications
Citations: 3
Eye-tracking measures in audiovisual stimuli in infants at high genetic risk for ASD: challenging issues
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3207423
Itziar Lozano, R. Campos, M. Belinchón
Individuals with autism spectrum disorder (ASD) have shown difficulties integrating auditory and visual sensory modalities. Here we aim to explore whether very young infants at genetic risk of ASD show atypicalities in this ability early in development. We recorded the visual attention of 4-month-old infants during a task using audiovisual natural stimuli (speaking faces). The complexity of this information and the attentional characteristics of this population, among other factors, pose considerable challenges for the quality of data obtained with an eye tracker. Here we discuss some of these challenges and outline possible solutions.
Citations: 0
A gaze gesture-based paradigm for situational impairments, accessibility, and rich interactions
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3208344
Vijay Rajanna, T. Hammond
Gaze gesture-based interactions on a computer are promising, but existing systems are limited by the number of supported gestures, recognition accuracy, the need to remember stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework in which a user can design gestures and associate them with appropriate commands such as minimize, maximize, and scroll. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of screen size and resolution, and the user can draw a gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged to develop solutions for situational impairments and accessibility, and to implement a rich interaction paradigm.
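Screen-size and resolution independence is commonly obtained with $1-recognizer-style normalization (a sketch of that general technique, not necessarily the authors' algorithm): resample the gaze stroke to a fixed number of points, scale it into a unit bounding box, and pick the template with the smallest average point-wise distance.

```python
# Sketch: scale-invariant gaze gesture matching via template comparison.
def normalize(stroke, n=32):
    """Resample a stroke of (x, y) points to n points (by index, for
    brevity) and translate/scale it into a unit bounding box."""
    step = (len(stroke) - 1) / (n - 1)
    pts = [stroke[round(i * step)] for i in range(n)]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w = max(xs) - min(xs) or 1.0   # avoid division by zero for flat strokes
    h = max(ys) - min(ys) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in pts]

def classify(stroke, templates):
    """templates: {gesture_name: list of (x, y) points}. Returns the
    name of the template with the smallest average point distance."""
    s = normalize(stroke)
    def dist(t):
        tt = normalize(t)
        return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                   for a, b in zip(s, tt)) / len(s)
    return min(templates, key=lambda name: dist(templates[name]))
```

Because both stroke and templates are normalized the same way, a gesture drawn large on one screen matches a template recorded small on another.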
Citations: 7
Robust marker tracking system for mapping mobile eye tracking data
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3208339
Iyad Aldaqre, Roberto Delfiore
One of the challenges of mobile eye tracking is mapping gaze data on a reference image of the stimulus. Here we present a marker-tracking system that relies on the scene-video, recorded by eye tracking glasses, to recognize and track markers and map gaze data on the reference image. Due to the simple nature of the markers employed, the current system works with low-quality videos and at long distances from the stimulus, allowing the use of mobile eye tracking in new situations.
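The mapping step can be sketched as follows (an assumed form of the general approach; the abstract does not specify the transform the authors use): with at least three markers located in both the scene-video frame and the reference image, an affine transform carries frame-coordinate gaze points onto the reference image.

```python
# Sketch: map gaze from a scene-video frame to a reference image using
# three marker correspondences and an affine transform (Cramer's rule).
def affine_from_markers(src, dst):
    """src, dst: three (x, y) marker positions in the frame and in the
    reference image. Returns (a, b, c, d, e, f) such that
    x' = a*x + b*y + c and y' = d*x + e*y + f."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
    def solve(v0, v1, v2):
        # solve [xi yi 1] . [a b c] = vi for the three markers
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c
    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
    return a, b, c, d, e, f

def map_gaze(T, x, y):
    a, b, c, d, e, f = T
    return a * x + b * y + c, d * x + e * y + f
```

With four or more markers one would typically fit a full homography instead, which also handles perspective distortion.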
Citations: 0
Towards using the spatio-temporal properties of eye movements to classify visual field defects
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3204590
A. Grillini, Daniel Ombelet, R. S. Soans, F. Cornelissen
Perimetry---assessment of visual field defects (VFD)---requires patients to be able to maintain a prolonged stable fixation, as well as to provide feedback through motor response. These aspects limit the testable population and often lead to inaccurate results. We hypothesized that different VFD would alter the eye-movements in systematic ways, thus making it possible to infer the presence of VFD by quantifying the spatio-temporal properties of eye movements. We developed a tracking test to record participant's eye-movements while we simulated different gaze-contingent VFD. We tested 50 visually healthy participants and simulated three common scotomas: peripheral loss, central loss and hemifield loss. We quantified spatio-temporal features using cross-correlogram analysis, then applied cross-validation to train a decision tree algorithm to classify the conditions. Our test is faster and more comfortable than standard perimetry and can achieve a classifying accuracy of ∼90% (True Positive Rate = ∼98%) with data acquired in less than 2 minutes.
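The cross-correlogram feature extraction can be sketched like this (an assumed form, not the authors' exact pipeline): correlate the gaze trace with the moving target trace at a range of lags; the peak's height and lag characterize tracking quality, and such features can then feed a decision-tree classifier.

```python
# Sketch: cross-correlogram of a gaze trace against a target trace.
def cross_correlogram(gaze, target, max_lag=10):
    """Return {lag: Pearson correlation of gaze vs. target}, where a
    positive lag means the gaze trails the target by `lag` samples."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    out = {}
    for lag in range(max_lag + 1):
        a = gaze[lag:]
        b = target[:-lag] if lag else target
        out[lag] = pearson(a, b)
    return out

def peak_features(ccg):
    """Reduce a correlogram to peak lag and peak correlation."""
    lag = max(ccg, key=ccg.get)
    return {"peak_lag": lag, "peak_corr": ccg[lag]}
```

A healthy tracker yields a high peak at a short lag; a simulated scotoma should distort both, which is what makes the features separable by a classifier.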
Citations: 21
Systematic shifts of fixation disparity accompanying brightness changes
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3204587
A. Huckauf
Video-based gaze tracking is sensitive to brightness changes because of their effect on pupil size. Monocular observations indeed confirm that fixation locations vary with brightness. In close viewing, pupil size is coupled with accommodation and vergence, the so-called near triad. Hence, systematic changes in fixation disparity might be expected to co-occur with varying pupil size. In the current experiment, fixation disparity was assessed. Calibration was conducted on either a dark or a bright background, and text had to be read on both backgrounds, on a self-illuminating screen and on paper. When the calibration background matched the background during reading, mean fixation disparity did not differ from zero. In the non-calibrated conditions, however, a brighter stimulus went along with a dominance of crossed fixations, and vice versa. The data demonstrate that systematic changes in fixation disparity occur as an effect of brightness changes, arguing for careful setting of calibration parameters.
Citations: 1
Use of attentive information dashboards to support task resumption in working environments
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3208348
Peyman Toreini, Moritz Langner, A. Maedche
Interruptions are known as one of the big challenges in working environments. Because the primary task may be resumed improperly, such interruptions can lead to task resumption failures and negatively influence task performance. This phenomenon also occurs when users work with information dashboards in working environments. To address this problem, we developed an attentive dashboard that issues visual feedback. This feedback supports the user in resuming the primary task after an interruption by guiding visual attention. The attentive dashboard captures the user's visual attention allocation with a low-cost screen-based eye tracker while they monitor the graphs. The dashboard detects external interruptions by tracking eye-movement data in real time. Moreover, based on the collected eye-movement data, two types of visual feedback are designed, highlighting the last fixated graph and unnoticed ones.
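The feedback logic described above can be sketched minimally (class and method names are hypothetical): map fixations to chart regions, remember the last fixated chart before an interruption, and collect charts that were never looked at.

```python
# Sketch: attentive-dashboard bookkeeping for task resumption cues.
class AttentiveDashboard:
    def __init__(self, charts):
        self.charts = charts          # {name: (x, y, width, height)}
        self.seen = set()
        self.last_fixated = None

    def on_fixation(self, x, y):
        """Record a fixation; return the chart it landed on, if any."""
        for name, (cx, cy, w, h) in self.charts.items():
            if cx <= x < cx + w and cy <= y < cy + h:
                self.seen.add(name)
                self.last_fixated = name
                return name
        return None

    def resumption_cues(self):
        """Feedback after an interruption: where to resume, what was missed."""
        unnoticed = sorted(set(self.charts) - self.seen)
        return {"resume_at": self.last_fixated, "unnoticed": unnoticed}
```

The two returned fields correspond to the two feedback types in the abstract: the last fixated graph to highlight, and the unnoticed ones to flag.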
Citations: 6
PuReST
Pub Date: 2018-06-14 DOI: 10.1145/3204493.3204578
Thiago Santini, Wolfgang Fuhl, Enkelejda Kasneci
Pervasive eye-tracking applications such as gaze-based human-computer interaction and advanced driver assistance require real-time, accurate, and robust pupil detection. However, automated pupil detection has proved to be an intricate task in real-world scenarios due to a large mixture of challenges - for instance, quickly changing illumination and occlusions. In this work, we introduce the Pupil Reconstructor with Subsequent Tracking (PuReST), a novel method for fast and robust pupil tracking. The proposed method was evaluated on over 266,000 realistic and challenging images acquired with three distinct head-mounted eye tracking devices. It increased the pupil detection rate by 5.44 and 29.92 percentage points and reduced average run time by factors of 2.74 and 1.1 relative to state-of-the-art pupil detectors and vendor-provided pupil trackers, respectively. Overall, PuReST outperformed the other methods in 81.82% of use cases.
Citations: 6