
Latest publications from the Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications

Binocular model-based gaze estimation with a camera and a single infrared light source
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204557
Laura Sesma, D. W. Hansen
We propose a binocular model-based method that only uses a single camera and an infrared light source. Most gaze estimation approaches are based on single-eye models; binocular setups are typically handled by averaging the results from each eye. In this work, we propose a geometric model of both eyes for gaze estimation. The proposed model is implemented and evaluated in a simulated environment and is compared to a binocular model-based method and a polynomial regression-based method, each using one camera and two infrared lights and averaging the results from both eyes. The method performs on par with methods using multiple light sources while maintaining robustness to head movements. The study shows that using both eyes in gaze estimation models makes it possible to reduce the hardware requirements while maintaining robustness.
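The binocular idea can be made concrete with a small numeric sketch: rather than averaging two monocular gaze points, treat the two eyes' visual axes as 3D rays and estimate the gaze point jointly as the point closest to both rays. This illustrates the general geometry only, not the authors' model; all names and values below are made up.

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point p minimizing the summed squared distance to
    rays o_i + t*d_i: solve (sum_i M_i) p = sum_i M_i o_i, where
    M_i = I - d_i d_i^T is the projector orthogonal to ray i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Illustrative eye positions (metres) and visual-axis directions.
eyes = np.array([[-0.03, 0.0, 0.0], [0.03, 0.0, 0.0]])     # left, right eye
axes = np.array([[0.06, -0.05, 0.6], [0.00, -0.05, 0.6]])  # toward the screen
print(closest_point_to_rays(eyes, axes))  # joint binocular gaze point
```

Because both rays constrain a single 3D point, noise in one eye's axis is partially compensated by the other, which is one way a joint model can outperform per-eye averaging.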
Citations: 1
CBF
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204559
Wolfgang Fuhl, D. Geisler, Thiago Santini, Tobias Appel, W. Rosenstiel, Enkelejda Kasneci
Modern eye tracking systems rely on fast and robust pupil detection, and several algorithms have been proposed for eye tracking under real-world conditions. In this work, we propose a novel binary feature selection approach that is trained by computing conditional distributions. These features are scalable and rotatable, allowing for distinct image resolutions, and consist of simple intensity comparisons, making the approach robust to different illumination conditions as well as rapid illumination changes. The proposed method was evaluated on multiple publicly available data sets, considerably outperforming state-of-the-art methods, and is real-time capable at very high frame rates. Moreover, our method is designed to sustain pupil center estimation even when typical edge-detection-based approaches fail - e.g., when the pupil outline is not visible due to occlusions from reflections or eyelids/lashes. As a consequence, it does not attempt to provide an estimate for the pupil outline. Nevertheless, the pupil center suffices for gaze estimation - e.g., by regressing the relationship between pupil center and gaze point during calibration.
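The core ingredient - binary features built from simple pixel-intensity comparisons - can be sketched in a few lines. The random sampling pattern below is an illustrative stand-in (in the spirit of BRIEF-style descriptors), not the conditional-distribution training the paper describes; offsets are expressed relative to patch size so the feature scales with image resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pattern(n_pairs=256, radius=0.4):
    # Pairs of point offsets in [-radius, radius]^2, relative to patch size.
    return rng.uniform(-radius, radius, size=(n_pairs, 2, 2))

def binary_descriptor(patch, pattern):
    """One bit per comparison I(p1) < I(p2); comparing intensities instead
    of using them directly is invariant to monotonic illumination changes."""
    h, w = patch.shape
    center = np.array([h / 2, w / 2])
    scale = np.array([h, w])
    bits = []
    for p1, p2 in pattern:
        y1, x1 = (center + p1 * scale).astype(int)
        y2, x2 = (center + p2 * scale).astype(int)
        bits.append(patch[y1, x1] < patch[y2, x2])
    return np.array(bits, dtype=np.uint8)

patch = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)  # fake eye patch
print(binary_descriptor(patch, make_pattern())[:16])
```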
{"title":"CBF","authors":"Wolfgang Fuhl, D. Geisler, Thiago Santini, Tobias Appel, W. Rosenstiel, Enkelejda Kasneci","doi":"10.1145/3204493.3204559","DOIUrl":"https://doi.org/10.1145/3204493.3204559","url":null,"abstract":"Modern eye tracking systems rely on fast and robust pupil detection, and several algorithms have been proposed for eye tracking under real world conditions. In this work, we propose a novel binary feature selection approach that is trained by computing conditional distributions. These features are scalable and rotatable, allowing for distinct image resolutions, and consist of simple intensity comparisons, making the approach robust to different illumination conditions as well as rapid illumination changes. The proposed method was evaluated on multiple publicly available data sets, considerably outperforming state-of-the-art methods, and being real-time capable for very high frame rates. Moreover, our method is designed to be able to sustain pupil center estimation even when typical edge-detection-based approaches fail - e.g., when the pupil outline is not visible due to occlusions from reflections or eye lids / lashes. As a consequece, it does not attempt to provide an estimate for the pupil outline. Nevertheless, the pupil center suffices for gaze estimation - e.g., by regressing the relationship between pupil center and gaze point during calibration.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115185177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Use of attentive information dashboards to support task resumption in working environments
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208348
Peyman Toreini, Moritz Langner, A. Maedche
Interruptions are known as one of the big challenges in working environments. If the primary task is resumed improperly, such interruptions may result in task resumption failures and negatively influence task performance. This phenomenon also occurs when users work with information dashboards in working environments. To address this problem, an attentive dashboard that issues visual feedback is developed. This feedback supports the user in resuming the primary task after the interruption by guiding their visual attention. The attentive dashboard captures the user's visual attention allocation with a low-cost screen-based eye tracker while they are monitoring the graphs. The dashboard detects the occurrence of external interruptions by tracking the eye-movement data in real time. Moreover, based on the collected eye-movement data, two types of visual feedback are designed, highlighting the last fixated graph and unnoticed ones.
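A minimal sketch of the resumption-feedback logic: record which dashboard graph each fixation lands in, flag an interruption when gaze is lost, and on the next fixation highlight the last fixated graph together with graphs never looked at. The AOI rectangles, class names, and print-based "highlight" are hypothetical placeholders, not the paper's system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AttentiveDashboard:
    aois: dict                      # graph name -> (x0, y0, x1, y1) rectangle
    seen: set = field(default_factory=set)
    last_fixated: Optional[str] = None
    interrupted: bool = False

    def on_fixation(self, x, y):
        for name, (x0, y0, x1, y1) in self.aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                if self.interrupted:
                    self.resume_feedback()
                self.last_fixated = name
                self.seen.add(name)
                return

    def on_gaze_lost(self):
        # Gaze leaving the screen is treated as an external interruption.
        self.interrupted = True

    def resume_feedback(self):
        unnoticed = set(self.aois) - self.seen
        print(f"highlight: {self.last_fixated}; unnoticed: {unnoticed}")
        self.interrupted = False

dash = AttentiveDashboard({"sales": (0, 0, 400, 300), "traffic": (400, 0, 800, 300)})
dash.on_fixation(100, 100)   # user studies the 'sales' graph
dash.on_gaze_lost()          # a colleague interrupts
dash.on_fixation(120, 90)    # on return, feedback fires before resuming
```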
Citations: 6
Intelligent cockpit: eye tracking integration to enhance the pilot-aircraft interaction
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207420
C. Lounis, Vsevolod Peysakhovich, M. Causse
In this research, we use eye tracking to monitor the attentional behavior of pilots in the cockpit. We built a cockpit monitoring database that serves as a reference for real-time assessment of the pilot's monitoring strategies, based on numerous flight simulator sessions with eye-tracking recordings. Eye tracking may also be employed as a passive input for assistive systems; future studies will also explore the possibility of adapting the notifications' modality using gaze.
Citations: 10
Image-based scanpath comparison with slit-scan visualization
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204581
Maurice Koch, K. Kurzhals, D. Weiskopf
The comparison of scanpaths between multiple participants is an important analysis task in eye tracking research. Established methods typically inspect recorded gaze sequences based on geometrical trajectory properties or strings derived from annotated areas of interest (AOIs). We propose a new approach based on image similarities of gaze-guided slit-scans: For each time step, a vertical slice is extracted from the stimulus at the gaze position. Placing the slices next to each other over time creates a compact representation of a scanpath in the context of the stimulus. These visual representations can be compared based on their image similarity, providing a new measure for scanpath comparison without the need for annotation. We demonstrate how comparative slit-scan visualization can be integrated into a visual analytics approach to support the interpretation of scanpath similarities in general.
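The slit-scan construction itself is compact enough to show directly: take a one-pixel-wide (or slightly wider) vertical slice from each frame at the gaze x-coordinate and stack the slices left to right. Function and variable names are illustrative; the paper's image-similarity measure is replaced here by a crude mean absolute difference.

```python
import numpy as np

def gaze_slit_scan(frames, gaze_xs, slit_width=1):
    """frames: (T, H, W, 3) video, gaze_xs: per-frame horizontal gaze position.
    Returns an (H, T*slit_width, 3) image - one vertical slice per time step."""
    T, H, W, _ = frames.shape
    slices = []
    for t in range(T):
        x = int(np.clip(gaze_xs[t], 0, W - slit_width))
        slices.append(frames[t, :, x:x + slit_width, :])
    return np.concatenate(slices, axis=1)

def slit_scan_distance(a, b):
    # Placeholder image-similarity measure between two slit-scans.
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

frames = np.random.randint(0, 255, size=(50, 120, 160, 3), dtype=np.uint8)
gaze_x = np.linspace(10, 150, 50)            # synthetic gaze positions
print(gaze_slit_scan(frames, gaze_x).shape)  # (120, 50, 3)
```

Because each slice is taken at the gaze position, two participants who attend to the same stimulus regions at the same times produce visually similar slit-scans, which is what the image comparison exploits.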
Citations: 4
Head and gaze control of a telepresence robot with an HMD
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208330
J. P. Hansen, A. Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, Guangtao Zhang
Gaze interaction with telerobots is a new opportunity for wheelchair users with severe motor disabilities. We present a video showing how head-mounted displays (HMD) with gaze tracking can be used to monitor a robot that carries a 360° video camera and a microphone. Our interface supports autonomous driving via way-points on a map, along with gaze-controlled steering and gaze typing. It is implemented with Unity, which communicates with the Robot Operating System (ROS).
Citations: 24
Comparison of mapping algorithms for implicit calibration using probable fixation targets
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204529
P. Kasprowski, Katarzyna Harężlak
With growing access to cheap, low-end eye trackers using simple web cameras, there is also a growing demand for easy and fast usage of these devices by untrained and unsupervised end users. For such users, the necessity to calibrate the eye tracker prior to its first usage is often perceived as obtrusive and inconvenient. At the same time, perfect accuracy is not necessary for many commercial applications. Therefore, the idea of implicit calibration attracts more and more attention. Algorithms for implicit calibration are able to calibrate the device without any active collaboration with users. In particular, a real-time implicit calibration that can calibrate a device on the fly, while a person uses an eye tracker, seems to be a reasonable solution to the aforementioned problems. The paper presents examples of implicit calibration algorithms (including their real-time versions) based on the idea of probable fixation targets (PFT). The algorithms were tested in a free-viewing experiment and compared to the state-of-the-art PFT-based algorithm and explicit calibration results.
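One plausible reading of the PFT idea as a sketch: assign each raw (uncalibrated) gaze sample to the on-screen target it most probably belongs to, fit the usual polynomial mapping from raw to screen coordinates on those pairs, and iterate. The nearest-target assignment, feature set, and iteration count below are simplifying assumptions, not the paper's algorithms.

```python
import numpy as np

def features(raw):
    # Second-order polynomial features of raw gaze coordinates.
    x, y = raw[:, 0], raw[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def implicit_calibration(raw_gaze, targets, n_iters=5):
    """Alternate between mapping raw gaze with the current polynomial and
    re-assigning each sample to its nearest probable fixation target."""
    F = features(raw_gaze)
    # Start from an identity-like mapping: fit raw gaze onto itself.
    W, *_ = np.linalg.lstsq(F, raw_gaze, rcond=None)
    for _ in range(n_iters):
        mapped = F @ W
        d = np.linalg.norm(mapped[:, None, :] - targets[None, :, :], axis=2)
        assigned = targets[d.argmin(axis=1)]   # most probable target per sample
        W, *_ = np.linalg.lstsq(F, assigned, rcond=None)
    return W

# Illustrative use: gaze distorted by an unknown affine map plus noise,
# recorded around a 3x3 grid of probable fixation targets.
rng = np.random.default_rng(1)
targets = np.array([(x, y) for x in (100, 400, 700) for y in (100, 300, 500)], float)
true = targets[rng.integers(0, len(targets), 300)]
raw = true * 0.9 + 25 + rng.normal(0, 5, true.shape)
W = implicit_calibration(raw, targets)
print(np.abs(features(raw) @ W - true).mean())  # mean error after calibration
```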
Citations: 5
Rapid alternating saccade training
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204588
Brent D. Parsons, R. Ivry
While individual eye movement characteristics are remarkably stable, experiments on saccadic spatial adaptation indicate that oculomotor learning is possible. To further investigate saccadic learning, participants received veridical feedback on saccade rate while making sequential saccades as quickly as possible between two horizontal targets. Over the course of five days, with just ten minutes of training per day, participants were able to significantly increase the rate of sequential saccades. This occurred through both a reduction in dwell duration and changes in secondary saccade characteristics. There was no concomitant change in participants' accuracy or precision. The learning was retained following the training and generalized to saccades of different directions, and to reaction time measures during a delayed saccade task. The study provides evidence for a novel form of saccadic learning with applicability in a number of domains.
Citations: 2
Contour-guided gaze gestures: using object contours as visual guidance for triggering interactions
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204530
Florian Jungwirth, Michael Haslgrübler, A. Ferscha
The eyes are an interesting modality for pervasive interactions, though their applicability to mobile scenarios has so far been restricted by several issues. In this paper, we propose the idea of contour-guided gaze gestures, which overcome former constraints, like the need for calibration, by relying on unnatural and relative eye movements, as users trace the contours of objects in order to trigger an interaction. The interaction concept and the system design are described, along with two user studies that demonstrate the method's applicability. It is shown that users were able to trace object contours to trigger actions from various positions on multiple different objects. It is further determined that the proposed method is an easy-to-learn, hands-free interaction technique that is robust against false-positive activations. Results highlight low demand values and show that the method holds potential for further exploration, but also reveal areas for refinement.
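A minimal sketch of one plausible trigger criterion, assuming relative gaze in rough screen coordinates: fire a gesture when the recent gaze trace both stays close to an object's contour and covers most of it. The thresholds and the point-set contour representation are assumptions, not the authors' detector.

```python
import numpy as np

def contour_trace_scores(gaze, contour, dist_thresh=30.0):
    """gaze: (N, 2) samples, contour: (M, 2) points, both in pixels.
    Returns (fraction of gaze samples near the contour,
             fraction of contour points visited by nearby gaze)."""
    d = np.linalg.norm(gaze[:, None, :] - contour[None, :, :], axis=2)
    on_path = (d.min(axis=1) < dist_thresh).mean()
    coverage = (d.min(axis=0) < dist_thresh).mean()
    return on_path, coverage

def gesture_triggered(gaze, contour, follow=0.8, cover=0.9):
    on_path, coverage = contour_trace_scores(gaze, contour)
    return on_path >= follow and coverage >= cover

# Illustrative circular object contour and a noisy gaze trace following it.
theta = np.linspace(0, 2 * np.pi, 100)
contour = np.column_stack([300 + 80 * np.cos(theta), 300 + 80 * np.sin(theta)])
gaze = contour + np.random.default_rng(2).normal(0, 8, contour.shape)
print(gesture_triggered(gaze, contour))  # True for a faithful trace
```

Requiring both closeness and coverage separates a deliberate trace from a glance that merely crosses the contour, which is one way to keep false-positive activations low.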
Citations: 15
Investigating the multicausality of processing speed deficits across developmental disorders with eye tracking and EEG: extended abstract
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207417
S. Dziemian, N. Langer
Neuropsychological tests inform about performance differences in cognitive functions, but they typically tell little about the causes of these differences. Here, we propose a project that builds upon a recently developed multimodal neuroscientific approach of simultaneous eye-tracking and EEG measurements to provide insights into the diverse causes of performance differences in the Symbol Search Test (SST). Using a unique large dataset, we plan to investigate the causes of performance differences in the SST in healthy and clinically diagnosed children and adolescents. Firstly, we aim to investigate how the causes of differences in SST performance evolve over age in healthy, typically developing children. With this, we plan to dissect age effects from effects that are specific to developmental neuropsychiatric disorders. Secondly, we will include subjects with deficient performance to investigate different causes of poor performance and identify data-driven subgroups of poor performers.
Citations: 1