
Latest publications: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications

CBF
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204559
Wolfgang Fuhl, D. Geisler, Thiago Santini, Tobias Appel, W. Rosenstiel, Enkelejda Kasneci
Modern eye tracking systems rely on fast and robust pupil detection, and several algorithms have been proposed for eye tracking under real-world conditions. In this work, we propose a novel binary feature selection approach that is trained by computing conditional distributions. These features are scalable and rotatable, allowing for distinct image resolutions, and consist of simple intensity comparisons, making the approach robust to different illumination conditions as well as rapid illumination changes. The proposed method was evaluated on multiple publicly available data sets, considerably outperforming state-of-the-art methods while remaining real-time capable at very high frame rates. Moreover, our method is designed to sustain pupil center estimation even when typical edge-detection-based approaches fail, e.g., when the pupil outline is not visible due to occlusions from reflections or eyelids/lashes. As a consequence, it does not attempt to provide an estimate of the pupil outline. Nevertheless, the pupil center suffices for gaze estimation, e.g., by regressing the relationship between pupil center and gaze point during calibration.
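The intensity-comparison idea behind such features can be sketched in a few lines (a minimal illustration, not the authors' implementation; the offset layout, border clamping, and darker-than comparison are assumptions):

```python
import math

def binary_features(image, cx, cy, offsets, scale=1.0, angle=0.0):
    """Binary intensity-comparison features around a candidate pupil
    center (cx, cy).  Each feature compares the grey values at two
    sampled points; scaling and rotating the offsets adapts the same
    feature set to different image resolutions and orientations.
    `image` is a 2-D list of grey values, `offsets` a list of
    ((dx1, dy1), (dx2, dy2)) point pairs (hypothetical layout)."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    feats = []
    for pair in offsets:
        samples = []
        for dx, dy in pair:
            # scale and rotate the sampling offset, then clamp to the image
            rx = scale * (dx * cos_a - dy * sin_a)
            ry = scale * (dx * sin_a + dy * cos_a)
            x = min(max(int(round(cx + rx)), 0), len(image[0]) - 1)
            y = min(max(int(round(cy + ry)), 0), len(image) - 1)
            samples.append(image[y][x])
        # binary feature: is the first sample darker than the second?
        feats.append(1 if samples[0] < samples[1] else 0)
    return feats
```

Because each feature only asks which of two pixels is darker, a global illumination shift leaves the feature vector unchanged, which is the robustness property the abstract attributes to intensity comparisons.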
Citations: 8
Intelligent cockpit: eye tracking integration to enhance the pilot-aircraft interaction
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207420
C. Lounis, Vsevolod Peysakhovich, M. Causse
In this research, we use eye tracking to monitor the attentional behavior of pilots in the cockpit. We built a cockpit monitoring database, based on numerous flight simulator sessions with eye-tracking recordings, that serves as a reference for real-time assessment of the pilot's monitoring strategies. Eye tracking may also be employed as a passive input for assistive systems; future studies will explore the possibility of adapting the notification modality using gaze.
Citations: 10
Gaze and head pointing for hands-free text entry: applicability to ultra-small virtual keyboards
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204539
Y. Gizatdinova, O. Špakov, O. Tuisku, M. Turk, Veikko Surakka
With the proliferation of small-screen computing devices, there has been a continuous trend in reducing the size of interface elements. In virtual keyboards, this allows for more characters in a layout and additional function widgets. However, vision-based interfaces (VBIs) have only been investigated with large (e.g., full-screen) keyboards. To understand how key size reduction affects the accuracy and speed performance of text entry VBIs, we evaluated gaze-controlled VBI (g-VBI) and head-controlled VBI (h-VBI) with unconventionally small (0.4°, 0.6°, 0.8° and 1°) keys. Novices (N = 26) produced text significantly more accurately and quickly with h-VBI than with g-VBI, while the performance of experts (N = 12) was nearly equal for both VBIs when a 0.8--1° key size was used. We discuss advantages and limitations of the VBIs for typing with ultra-small keyboards and emphasize relevant factors for designing such systems.
Citations: 10
Image-based scanpath comparison with slit-scan visualization
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204581
Maurice Koch, K. Kurzhals, D. Weiskopf
The comparison of scanpaths between multiple participants is an important analysis task in eye tracking research. Established methods typically inspect recorded gaze sequences based on geometrical trajectory properties or strings derived from annotated areas of interest (AOIs). We propose a new approach based on image similarities of gaze-guided slit-scans: For each time step, a vertical slice is extracted from the stimulus at the gaze position. Placing the slices next to each other over time creates a compact representation of a scanpath in the context of the stimulus. These visual representations can be compared based on their image similarity, providing a new measure for scanpath comparison without the need for annotation. We demonstrate how comparative slit-scan visualization can be integrated into a visual analytics approach to support the interpretation of scanpath similarities in general.
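The per-time-step slice extraction can be sketched as follows (a minimal grey-scale illustration under assumed data layouts; the actual tool works on video frames and compares the resulting scans by image similarity):

```python
def slit_scan(frames, gaze_xs, slit_width=1):
    """Build a slit-scan: for each time step, cut a vertical slice of
    `slit_width` columns from the stimulus frame at the recorded gaze
    x position and append it to the scan image.
    `frames` is a list of 2-D intensity arrays (rows x cols) and
    `gaze_xs` holds the horizontal gaze coordinate per frame."""
    height = len(frames[0])
    scan = [[] for _ in range(height)]
    for frame, gx in zip(frames, gaze_xs):
        cols = len(frame[0])
        # clamp the slit so it stays entirely inside the frame
        x0 = min(max(int(gx) - slit_width // 2, 0), cols - slit_width)
        for scan_row, frame_row in zip(scan, frame):
            scan_row.extend(frame_row[x0:x0 + slit_width])
    return scan
```

Placing one slice per time step side by side yields the compact scanpath representation in the context of the stimulus; two such scans can then be compared with any image-similarity measure, with no AOI annotation required.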
Citations: 4
Contour-guided gaze gestures: using object contours as visual guidance for triggering interactions
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204530
Florian Jungwirth, Michael Haslgrübler, A. Ferscha
The eyes are an interesting modality for pervasive interaction, though their applicability in mobile scenarios has so far been restricted by several issues. In this paper, we propose contour-guided gaze gestures, which overcome former constraints, such as the need for calibration, by relying on unnatural and relative eye movements as users trace the contours of objects in order to trigger an interaction. The interaction concept and the system design are described, along with two user studies that demonstrate the method's applicability. It is shown that users were able to trace object contours to trigger actions from various positions on multiple different objects. It is further determined that the proposed method is an easy-to-learn, hands-free interaction technique that is robust against false-positive activations. Results highlight low demand values, show that the method holds potential for further exploration, and also reveal areas for refinement.
Citations: 15
Rapid alternating saccade training
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204588
Brent D. Parsons, R. Ivry
While individual eye movement characteristics are remarkably stable, experiments on saccadic spatial adaptation indicate that oculomotor learning is possible. To further investigate saccadic learning, participants received veridical feedback on saccade rate while making sequential saccades as quickly as possible between two horizontal targets. Over the course of five days, with just ten minutes of training per day, participants were able to significantly increase the rate of sequential saccades. This occurred through both a reduction in dwell duration and changes in secondary saccade characteristics. There was no concomitant change in participants' accuracy or precision. The learning was retained after training and generalized to saccades of different directions, and to reaction time measures during a delayed saccade task. The study provides evidence for a novel form of saccadic learning with applicability in a number of domains.
Citations: 2
Head and gaze control of a telepresence robot with an HMD
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208330
J. P. Hansen, A. Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, Guangtao Zhang
Gaze interaction with telerobots is a new opportunity for wheelchair users with severe motor disabilities. We present a video showing how head-mounted displays (HMD) with gaze tracking can be used to monitor a robot that carries a 360° video camera and a microphone. Our interface supports autonomous driving via way-points on a map, along with gaze-controlled steering and gaze typing. It is implemented with Unity, which communicates with the Robot Operating System (ROS).
Citations: 24
Comparison of mapping algorithms for implicit calibration using probable fixation targets
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204529
P. Kasprowski, Katarzyna Harężlak
With growing access to cheap low-end eye trackers using simple web cameras, there is also growing demand for easy and fast use of these devices by untrained and unsupervised end users. For such users, the necessity to calibrate the eye tracker prior to its first use is often perceived as obtrusive and inconvenient. At the same time, perfect accuracy is not necessary for many commercial applications. Therefore, the idea of implicit calibration attracts more and more attention. Algorithms for implicit calibration are able to calibrate the device without any active collaboration from users. In particular, real-time implicit calibration, which can calibrate a device on-the-fly while a person uses an eye tracker, seems to be a reasonable solution to the aforementioned problems. The paper presents examples of implicit calibration algorithms (including their real-time versions) based on the idea of probable fixation targets (PFT). The algorithms were tested in a free-viewing experiment and compared to the state-of-the-art PFT-based algorithm and to explicit calibration results.
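As a toy illustration of the PFT idea (a deliberately simplified sketch, not the paper's algorithms: the nearest-target assignment and the independent per-axis affine fit are assumptions made here for brevity):

```python
def fit_implicit_calibration(raw_gaze, probable_targets):
    """Implicit-calibration sketch: assume each uncalibrated gaze
    sample was actually directed at its nearest probable fixation
    target, then least-squares fit an affine correction per axis.
    Returns a function mapping raw gaze points to corrected ones."""
    def nearest(p):
        return min(probable_targets,
                   key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)

    pairs = [(g, nearest(g)) for g in raw_gaze]

    def affine_fit(xs, ys):
        # ordinary least squares for y = a*x + b
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return a, my - a * mx

    ax, bx = affine_fit([g[0] for g, _ in pairs], [t[0] for _, t in pairs])
    ay, by = affine_fit([g[1] for g, _ in pairs], [t[1] for _, t in pairs])
    return lambda p: (ax * p[0] + bx, ay * p[1] + by)
```

No explicit calibration step is needed: the user never looks at calibration markers, and a real-time variant can refit the mapping as new samples and probable targets arrive.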
Citations: 5
Binocular model-based gaze estimation with a camera and a single infrared light source
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204557
Laura Sesma, D. W. Hansen
We propose a binocular model-based method that uses only a single camera and an infrared light source. Most gaze estimation approaches are based on single-eye models; binocular extensions typically just average the results from each eye. In this work, we propose a geometric model of both eyes for gaze estimation. The proposed model is implemented and evaluated in a simulated environment and compared to a binocular model-based method and a polynomial regression-based method, each using one camera and two infrared lights and averaging the results from both eyes. The method performs on par with methods using multiple light sources while maintaining robustness to head movements. The study shows that using both eyes in gaze estimation models makes it possible to reduce hardware requirements while maintaining robustness.
Citations: 1
New features of scangraph: a tool for revealing participants' strategy from eye-movement data
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208334
S. Popelka, J. Dolezalová, Marketa Beitlova
The demo describes new features of ScanGraph, an application intended for finding participants with a similar stimulus reading strategy based on their sequences of visited Areas of Interest (AOIs). The result is visualised using cliques of a simple graph. ScanGraph was initially introduced in 2016. Since the original publication, new features have been added. The first is the implementation of the Damerau-Levenshtein algorithm for similarity calculation. The heuristic clique-finding algorithm used in the original version was replaced by the Bron-Kerbosch algorithm. ScanGraph reads data from the open-source application OGAMA and, with the use of a conversion tool, also data from SMI BeGaze, which allows dynamic stimuli to be analysed as well. The most prominent enhancement is the possibility of calculating similarity among participants not only for a single stimulus but for multiple files at once.
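The two named algorithms can be sketched together in stand-alone form (a simplified illustration, not ScanGraph's actual code; the distance threshold and toy AOI sequences are assumptions):

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment)
    distance between two AOI sequences."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def bron_kerbosch(r, p, x, adj, cliques):
    """Bron-Kerbosch enumeration of all maximal cliques."""
    if not p and not x:
        cliques.append(set(r))
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p = p - {v}
        x = x | {v}

# Group participants whose AOI sequences differ by at most one edit
# (threshold and sequences are made up for the example).
seqs = {"p1": "ABCD", "p2": "ABDC", "p3": "ABCD", "p4": "XY"}
names = list(seqs)
adj = {n: set() for n in names}
for i, u in enumerate(names):
    for v in names[i + 1:]:
        if damerau_levenshtein(seqs[u], seqs[v]) <= 1:
            adj[u].add(v)
            adj[v].add(u)
cliques = []
bron_kerbosch(set(), set(names), set(), adj, cliques)
```

Here vertices are participants and an edge means two AOI sequences are sufficiently similar; each maximal clique is then a candidate group sharing a reading strategy.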
Citations: 6