
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Predicting observer's task from eye movement patterns during motion image analysis
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204575
Jutta Hild, M. Voit, Christian Kühnle, J. Beyerer
Predicting an observer's task from eye movements during several viewing tasks has been investigated by several authors. This contribution adds task prediction from eye movements for tasks occurring during motion image analysis: Explore, Observe, Search, and Track. For this purpose, gaze data was recorded from 30 human observers viewing a motion image sequence once under each task. For task decoding, the classification methods Random Forest, LDA, and QDA were used; features were fixation- or saccade-related measures. The best accuracy for predicting the three tasks Observe, Search, and Track from the 4-minute gaze data samples was 83.7% (chance level 33%) using Random Forest. The best accuracy for predicting all four tasks from the gaze data samples containing the first 30 seconds of viewing was 59.3% (chance level 25%) using LDA. Accuracy decreased significantly for task prediction on small gaze data chunks of 5 and 3 seconds, being 45.3% and 38.0% (chance 25%) for the four tasks, and 52.3% and 47.7% (chance 33%) for the three tasks.
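A minimal sketch of this kind of task decoding, assuming fixation- and saccade-related features have already been aggregated per gaze-data chunk; the feature set, synthetic data, and classifier settings below are illustrative placeholders, not the authors' pipeline:

# Sketch: decoding the viewing task (Explore / Observe / Search / Track) from
# fixation- and saccade-based features, one feature vector per gaze-data chunk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# e.g. mean fixation duration, fixation rate, mean saccade amplitude, mean saccade velocity
X = rng.normal(size=(120, 4))        # 120 gaze-data chunks, 4 features (placeholder)
y = rng.integers(0, 4, size=120)     # task labels: 0=Explore, 1=Observe, 2=Search, 3=Track

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")   # chance level is 0.25 for four tasks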
Citations: 7
Correlation between gaze and hovers during decision-making interaction
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204567
Pierre Weill-Tessier, Hans-Werner Gellersen
Taps only consist of a small part of the manual input when interacting with touch-enabled surfaces. Indeed, how the hand behaves in the hovering space is informative of what the user intends to do. In this article, we present a data collection related to hand and eye motion. We tailored a kiosk-like system to record participants' gaze and hand movements. We specifically designed a memory game to detect the decision-making process users may face. Our data collection comprises of 177 trials from 71 participants. Based on a hand movement classification, we extracted 16588 hovers. We study the gaze behaviour during hovers, and we found out that the distance between gaze and hand depends on the target's location on the screen. We also showed how indecision can be deducted from this distance.
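A small sketch of one step such an analysis needs, computing the gaze-to-hand distance per hover; the record layout (hover segments as sample-index ranges) and the synthetic data are assumptions for illustration:

# Sketch: per-hover distance between the gaze point and the hovering fingertip,
# assuming both are available as screen coordinates per sample.
import numpy as np

def gaze_hand_distances(gaze_xy, hand_xy, hover_segments):
    """Mean gaze-to-hand Euclidean distance (pixels) for each hover segment."""
    dists = []
    for start, end in hover_segments:            # [start, end) sample indices of one hover
        d = np.linalg.norm(gaze_xy[start:end] - hand_xy[start:end], axis=1)
        dists.append(d.mean())
    return np.array(dists)

# Example with synthetic samples on a 1920x1080 screen
gaze = np.random.rand(1000, 2) * [1920, 1080]
hand = np.random.rand(1000, 2) * [1920, 1080]
print(gaze_hand_distances(gaze, hand, [(100, 180), (400, 520)]))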
Citations: 5
Virtual reality as a proxy for real-life social attention?
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207411
Marius Rubo, M. Gamer
Previous studies found large amounts of overt attention allocated towards human faces when they were presented as images or videos, but a relative avoidance of gaze at conspecifics' faces in real-world situations. We measured gaze behavior in a complex virtual scenario in which a human face and an object were similarly exposed to the participants' view. Gaze at the face was avoided compared to gaze at the object, providing support for the hypothesis that virtual reality scenarios are capable of eliciting modes of information processing comparable to real-world situations.
Citations: 10
Pupil size as an indicator of visual-motor workload and expertise in microsurgical training tasks
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204577
R. Bednarik, P. Bartczak, Hana Vrzakova, Jani Koskinen, A. Elomaa, Antti Huotarinen, David Gil de Gómez Pérez, M. Fraunberg
Pupillary responses have long been linked to cognitive workload in numerous tasks. In this work, we investigate the role of pupil dilations in the context of microsurgical training, the handling of microinstruments and the suturing act in particular. With an eye tracker embedded in the surgical microscope oculars, eleven medical participants repeated 12 sutures of artificial skin under high magnification. A detailed analysis of pupillary dilations in suture segments revealed that pupillary responses indeed varied not only according to the main suture segments but also in relation to participants' expertise.
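A brief sketch of a common way to quantify such responses, baseline-corrected mean pupil dilation per task segment; the segment boundaries, baseline window, and subtractive correction are assumptions, not necessarily the authors' exact analysis:

# Sketch: mean pupil-size change from a pre-task baseline for each suture segment.
import numpy as np

def segment_dilation(pupil, segments, baseline):
    """Mean pupil dilation relative to baseline for each (start, end) sample range."""
    base = np.nanmean(pupil[baseline[0]:baseline[1]])
    return np.array([np.nanmean(pupil[s:e]) - base for s, e in segments])

pupil = np.random.normal(3.5, 0.2, size=5000)        # pupil diameter in mm (synthetic)
print(segment_dilation(pupil, [(1000, 1500), (2000, 2600)], baseline=(0, 500)))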
Citations: 18
Wearable eye tracker calibration at your fingertips
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204592
Mihai Bâce, S. Staal, Gábor Sörös
Common calibration techniques for head-mounted eye trackers rely on markers or an additional person to assist with the procedure. This is a tedious process and may even hinder some practical applications. We propose a novel calibration technique which simplifies the initial calibration step for mobile scenarios. To collect the calibration samples, users only have to point with a finger to various locations in the scene. Our vision-based algorithm detects the users' hand and fingertips, which indicate the users' point of interest. This eliminates the need for additional assistance or specialized markers. Our approach achieves accuracy comparable to similar marker-based calibration techniques and was the method preferred by users in our study. The implementation is openly available as a plugin for the open-source Pupil eye tracking platform.
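A hedged sketch of the regression step such a calibration could use, fitting a mapping from eye-camera pupil positions to scene-camera coordinates with the detected fingertip positions as targets; the second-order polynomial model and the synthetic data here are generic placeholders, not necessarily what the authors' Pupil plugin implements:

# Sketch: least-squares calibration mapping pupil coordinates to scene coordinates,
# using fingertip detections as the calibration targets.
import numpy as np

def fit_polynomial_calibration(pupil_xy, target_xy):
    """Fit a 2nd-order 2D polynomial mapping pupil -> scene coordinates."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coef                                      # shape (6, 2)

def apply_calibration(coef, pupil_xy):
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coef

pupil = np.random.rand(12, 2)                        # 12 pointing samples (synthetic)
fingertips = np.random.rand(12, 2) * [1280, 720]     # detected fingertip positions in the scene image
coef = fit_polynomial_calibration(pupil, fingertips)
print(apply_calibration(coef, pupil)[:3])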
Citations: 8
Dwell time reduction technique using Fitts' law for gaze-based target acquisition
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204532
Toshiya Isomoto, Toshiyuki Ando, B. Shizuki, Shin Takahashi
We present a dwell time reduction technique for gaze-based target acquisition. We adopt Fitts' Law to achieve the dwell time reduction. Our technique uses both the eye movement time for target acquisition estimated using Fitts' Law (Te) and the actual eye movement time (Ta) for target acquisition; a target is acquired when the difference between Te and Ta is small. First, we investigated the relation between the eye movement for target acquisition and Fitts' Law; the result indicated a correlation of 0.90 after error correction. Then we designed and implemented our technique. Finally, we conducted a user study to investigate the performance of our technique; an average dwell time of 86.7 ms was achieved, with a 10.0% Midas-touch rate.
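A minimal sketch of the acquisition rule described above: estimate the eye movement time Te with Fitts' Law, Te = a + b * log2(D/W + 1), and trigger selection when it is close to the measured movement time Ta; the coefficients a, b and the tolerance below are placeholders, since the paper determines its own values:

# Sketch: Fitts' Law based dwell-time reduction rule for gaze target acquisition.
import math

def fitts_movement_time(distance, width, a=0.05, b=0.1):
    """Fitts' Law estimate Te = a + b * log2(D / W + 1), in seconds."""
    return a + b * math.log2(distance / width + 1)

def acquired(distance, width, measured_time, tolerance=0.05):
    """Acquire the target when |Te - Ta| is small."""
    te = fitts_movement_time(distance, width)
    return abs(te - measured_time) < tolerance

print(fitts_movement_time(distance=400, width=40))   # estimated Te for a 400 px move to a 40 px target
print(acquired(distance=400, width=40, measured_time=0.39))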
Citations: 20
Real-time gaze transition entropy
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208340
Islam Akef Ebeid, J. Gwizdka
In this video, we introduce a real-time algorithm that computes gaze transition entropy. This approach can be employed in detecting higher-level cognitive states such as situation awareness. We first compute fixations using our real-time version of a well-established velocity-threshold-based algorithm. We then compute the gaze transition entropy for a content-independent grid of areas of interest in real time using an update processing window approach. We test for the Markov property after each update to check whether the Markov assumption holds. Higher entropy corresponds to increased eye movement and more frequent monitoring of the visual field. In contrast, lower entropy corresponds to fewer eye movements and less frequent monitoring. Based on entropy levels, the system could then alert the user accordingly and plausibly offer an intervention. We developed an example application to demonstrate the use of the online calculation of gaze transition entropy in a practical scenario.
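A compact sketch of gaze transition entropy over a fixed AOI grid, following the usual definition H = -sum_i pi_i * sum_j p_ij * log2(p_ij), where pi is the AOI occupancy and p_ij the AOI-to-AOI transition probabilities; the grid size, the fixation-to-AOI assignment, and the update-window handling are assumptions rather than the authors' implementation:

# Sketch: entropy of the fixation transition matrix over a sequence of AOI indices.
import numpy as np

def transition_entropy(aoi_sequence, n_aois):
    """Gaze transition entropy (bits) for a sequence of fixation AOI indices."""
    counts = np.zeros((n_aois, n_aois))
    for src, dst in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[src, dst] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    pi = row_sums.ravel() / row_sums.sum()            # empirical AOI occupancy
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(p > 0, np.log2(p), 0.0)
    return float(-(pi[:, None] * p * logp).sum())

# Fixations mapped to a 3x3 grid -> AOI indices 0..8 (synthetic sequence)
seq = np.random.default_rng(1).integers(0, 9, size=200)
print(transition_entropy(seq, n_aois=9))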
Citations: 10
Automatic mapping of gaze position coordinates of eye-tracking glasses video on a common static reference image
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208331
Adam Bykowski, Szymon Kupiński
This paper describes a method for automatic semantic gaze mapping from video obtained by eye-tracking glasses to a common reference image. Image feature detection and description algorithms are used to find the position of subsequent video frames and map the corresponding gaze coordinates onto a common reference image. This process allows aggregating experiment results for further analysis and provides an alternative to manual semantic gaze mapping methods.
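A sketch of the general idea with OpenCV: match features between a video frame and the reference image, estimate a RANSAC homography, and transform the gaze point into reference-image coordinates; the choice of ORB and the parameters are assumptions, since the paper does not name a specific detector here:

# Sketch: mapping a gaze point from a scene-camera frame onto a static reference image.
import cv2
import numpy as np

def map_gaze_to_reference(frame_gray, reference_gray, gaze_xy):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    kp_r, des_r = orb.detectAndCompute(reference_gray, None)
    if des_f is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    pt = np.float32([[gaze_xy]])                      # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]      # gaze in reference-image coordinates

# Example (assumes grayscale images loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE)):
# mapped = map_gaze_to_reference(frame, reference, gaze_xy=(512.0, 384.0))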
Citations: 5
Development and evaluation of a gaze feedback system integrated into eyetrace
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204561
Kai Otto, Nora Castner, David Geisler, Enkelejda Kasneci
A growing area of eye-tracking research is the use of gaze data for real-time feedback to the subject. In this work, we present a software system for such experiments and validate it with a visual search task experiment. This system was integrated into an eye-tracking analysis tool. Our aim was to improve subject performance in this task by employing saliency features for gaze guidance. This real-time feedback system is applicable within many realms, such as learning interventions, computer entertainment, or virtual reality.
Citations: 7
Anyorbit: orbital navigation in virtual environments with eye-tracking
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204555
B. Outram, Yun Suen Pai, Tanner Person, K. Minamizawa, K. Kunze
Gaze-based interactions promise to be fast, intuitive and effective in controlling virtual and augmented environments. Yet, there is still a lack of usable 3D navigation and observation techniques. In this work: 1) We introduce a highly advantageous orbital navigation technique, AnyOrbit, providing an intuitive and hands-free method of observation in virtual environments that uses eye-tracking to control the orbital center of movement; 2) The versatility of the technique is demonstrated with several control schemes and use-cases in virtual/augmented reality head-mounted-display and desktop setups, including observation of 3D astronomical data and spectator sports.
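For orientation, a generic orbital-camera sketch where the pivot could be set from the gaze point; this is plain spherical-coordinate orbiting for illustration, not AnyOrbit's actual control scheme:

# Sketch: orbiting a camera around a pivot point while keeping it aimed at the pivot.
import numpy as np

def orbit_camera(pivot, radius, azimuth, elevation):
    """Camera position on a sphere of given radius around the pivot (angles in radians)."""
    offset = radius * np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])
    position = np.asarray(pivot, dtype=float) + offset
    forward = np.asarray(pivot, dtype=float) - position
    forward /= np.linalg.norm(forward)               # camera keeps looking at the pivot
    return position, forward

pos, fwd = orbit_camera(pivot=[0.0, 1.5, 0.0], radius=3.0,
                        azimuth=np.deg2rad(30), elevation=np.deg2rad(15))
print(pos, fwd)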
Citations: 14