
Latest publications: 2022 Symposium on Eye Tracking Research and Applications

Mobile Device Eye Tracking on Dynamic Visual Contents using Edge Computing and Deep Learning
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532198
N. Gunawardena, J. A. Ginige, Bahman Javadi, G. Lui
Eye-tracking has been used in various domains, including human-computer interaction, psychology, and many others. Compared to commercial eye trackers, eye tracking using off-the-shelf cameras has many advantages, such as lower cost, pervasiveness, and mobility. Quantifying human attention on the mobile device is invaluable in human-computer interaction. Dynamic visual stimuli, such as videos and mobile games, require higher attention than static visual stimuli such as web pages and images. This research aims to develop an accurate eye-tracking algorithm using the front-facing camera of mobile devices to identify human attention hotspots when viewing video-type content. The limited computational power of mobile devices makes it challenging to achieve high user satisfaction. Edge computing moves the processing power closer to the source of the data and reduces the latency introduced by cloud computing. Therefore, the proposed algorithm will be extended with mobile edge computing to provide a real-time eye-tracking experience for users.
Citations: 1
Looking Confused? – Introducing a VR Game Design for Arousing Confusion Among Players
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529614
M. Lankes, Maurice Sporn, A. Winkelbauer, Barbara Stiglbauer
This paper describes a virtual reality prototype’s game and level design that should serve as stimulus material to arouse confusion among players. In doing so, we will use the stimulus to create a system that analyzes the players’ gaze behavior and determines whether and when the player is confused or already frustrated. Consequently, this information can then be used by game and level designers to give the player appropriate assistance and guidance during gameplay. To reach our goal of creating a design that arouses confusion, we used guidelines for high-quality level design and did the exact opposite. We see the PLEY Workshop as a forum to identify and discuss the potentials and risks of the stimulus’ design and level structure. In general, the paper should provide insights into our design decisions for researchers interested in investigating gaze behavior in games.
Citations: 0
Calibration Error Prediction: Ensuring High-Quality Mobile Eye-Tracking
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529634
Beibin Li, JC Snider, Quan Wang, Sachin Mehta, Claire E. Foster, E. Barney, Linda G. Shapiro, P. Ventola, F. Shic
Gaze calibration is common in traditional infrared oculographic eye tracking. However, it is not well studied in visible-light mobile/remote eye tracking. We developed a lightweight real-time gaze error estimator and analyzed calibration errors from two perspectives: facial feature-based and Monte Carlo-based. Both methods correlated with gaze estimation errors, but the Monte Carlo method associated more strongly. Facial feature associations with gaze error were interpretable, relating movements of the face to the visibility of the eye. We highlight the degradation of gaze estimation quality in a sample of children with autism spectrum disorder (as compared to typical adults), and note that calibration methods may improve Euclidean error by 10%.
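The Monte Carlo side of the comparison above can be made concrete with a minimal sketch: repeatedly perturb the predicted calibration gaze points with sampled noise (a hypothetical stand-in for landmark and model uncertainty, not the authors' actual procedure) and summarize the resulting Euclidean-error distribution. All names and the noise model here are illustrative assumptions.

```python
import math
import random

random.seed(0)  # reproducible sampling for the sketch

def monte_carlo_gaze_error(predictions, targets, noise_sd=0.5, n_samples=1000):
    """Estimate the distribution of mean Euclidean gaze error by
    perturbing each predicted gaze point with Gaussian noise
    (a hypothetical proxy for model/landmark uncertainty)."""
    errors = []
    for _ in range(n_samples):
        total = 0.0
        for (px, py), (tx, ty) in zip(predictions, targets):
            nx = px + random.gauss(0.0, noise_sd)
            ny = py + random.gauss(0.0, noise_sd)
            total += math.hypot(nx - tx, ny - ty)
        errors.append(total / len(predictions))
    errors.sort()
    return {
        "mean": sum(errors) / n_samples,          # expected error
        "p95": errors[int(0.95 * n_samples) - 1],  # pessimistic bound
    }

# Toy calibration data: predicted gaze vs. known target positions.
preds = [(0.1, 0.2), (0.5, 0.5), (0.9, 0.8)]
targs = [(0.0, 0.2), (0.5, 0.6), (1.0, 0.8)]
stats = monte_carlo_gaze_error(preds, targs)
```

The summary statistics (e.g., the 95th percentile) give a single calibration-quality score that can be compared against a facial-feature-based predictor, which is the kind of comparison the abstract describes.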
Citations: 2
Gaze Estimation with Imperceptible Marker Displayed Dynamically using Polarization
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529640
Yutaro Inoue, Koki Koshikawa, K. Takemura
Conventional eye-tracking methods require NIR-LEDs at the corners and edges of displays as references. However, extensive eyeball rotation results in the loss of reflections. Therefore, we propose imperceptible markers that can be dynamically displayed using liquid crystals. Using the characteristics of polarized light, the imperceptible markers are shown on a screen as references for eye-tracking. Additionally, the marker positions can be changed using the eyeball pose in the previous frame. The point-of-gaze was determined using the imperceptible markers based on model-based eye gaze estimation. The accuracy of the estimated PoG obtained using the imperceptible marker was approximately 1.69°, higher than that obtained using NIR-LEDs. Through experiments, we confirmed the feasibility and effectiveness of relocating imperceptible markers on the screen.
Citations: 0
Attention of Many Observers Visualized by Eye Movements
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529235
Teresa Hirzle, Marian Sauter, Tobias Wagner, Susanne Hummel, E. Rukzio, A. Huckauf
Interacting with a group of people requires directing the attention of the whole group, which in turn requires feedback about the crowd’s attention. In face-to-face interactions, head and eye movements serve as indicators of crowd attention. However, when interacting online, such indicators are not available. To substitute this information, gaze visualizations were adapted for a crowd scenario. We developed, implemented, and evaluated four types of visualizations of crowd attention in an online study with 72 participants using lecture videos enriched with the audience’s gazes. All participants reported increased connectedness to the audience, especially for visualizations depicting the whole distribution of gaze including spatial information. Visualizations avoiding spatial overlay by depicting only the variability were regarded as less helpful, for real-time as well as for retrospective analyses of lectures. Improving our visualizations of crowd attention has the potential for a broad variety of applications, in all kinds of social interaction and communication in groups.
Citations: 4
The Benefits and Drawbacks of Eye Tracking for Improving Educational Systems
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529242
Michael Burch, Rahel Haymoz, Sabrina Lindau
Educational systems are based on the teaching inputs from teachers and the learning outputs from the pupils/students. Moreover, the surrounding environment, as well as the social activities and networks of the people involved, has an impact on the success of such a system. However, most of these systems rely on standard teaching equipment, making it difficult to gain knowledge and insight into the fine-grained spatio-temporal activities in the classroom. On the other hand, understanding why an educational system is not as successful as expected can hardly be achieved by merely generating statistics about learning outputs based on several student-related metrics. In this paper we discuss the benefits of using eye tracking as a powerful technology for tracking people’s visual attention to explore the value of an educational system; however, we also examine the drawbacks, which come in the form of data recording, storing, and processing, and finally data analysis and visualization for rapidly gaining insight into such large datasets. We argue that if eye tracking is applied in a clever way, an educational system might draw valuable conclusions to improve itself from several perspectives, be it in the context of online/remote or classroom teaching, or from the perspectives of teachers and pupils/students.
Citations: 1
VR Cognitive Load Dashboard for Flight Simulator
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529777
Somnath Arjun, Archana Hebbar, Sanjana, P. Biswas
Estimating the cognitive load of aircraft pilots is essential to monitor them constantly to identify and overcome unfavorable situations. Presently, the cognitive load of pilots is estimated using manual filling up of forms, and there is a lack of a system that can estimate workload automatically. In this paper, we used eye-tracking technology for cognitive load estimation and developed a Virtual Reality dashboard that visualizes cognitive and ocular data. We undertook a flight simulation study to observe users’ workload during primary and secondary task execution while flying the aircraft. We also undertook an eye-tracking study to identify appropriate 3D graph properties for developing graphs of the cognitive dashboard. We found a significant interaction between users’ primary and secondary tasks. We also observed that primitives like size and color were easier to encode numerical and nominal information. Finally, we developed a dashboard leveraging the results of the 3D graph study for estimating the pilot’s cognitive load. The VR dashboard enabled visualization of the cognitive load parameters derived from the ocular data in real time. The aim of the 3D graph study was to identify the optimal information to be displayed to the participants/pilots. Apart from estimating cognitive load using ocular data, the dashboard can also visualize ocular data collected in a Virtual Reality environment.
Citations: 3
The Effect of Day and Night Mode on the Perception of Map Navigation Device
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531164
S. Popelka, A. Vondráková, Romana Skulnikova
Day and night mode is widely used when working with any digital device, including map navigation. Many users have the day and night mode change set automatically. However, it is not proven if this functionality helps improve the transfer of information between the map and the user when changing the lighting conditions. The short paper aims to evaluate the influence of day and night modes on the map users’ perception. User testing was realised in the eye-tracking laboratory with 43 participants. These participants were categorised by the average number of hours spent driving per week and their use of map navigation. The eye-tracking experiment focuses on the orientation of the participants in the day and night mode of map views when changing the lighting conditions. For that, the Euro Truck Simulator game environment was chosen, where the participants were guided by the map navigation in the bottom right corner of the screen. The lighting conditions in the ET laboratory have been adjusted to match the lighting conditions for both day and night as realistically as possible, and the map navigation mode was switched between day and night mode. The explanatory research suggested that using day mode during nighttime may cause disorientation and dazzle; using night mode during daytime does not cause that problem, but the user perception is slightly slower.
Citations: 0
Real-time head-based deep-learning model for gaze probability regions in collaborative VR
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529642
Riccardo Bovo, D. Giunchi, Ludwig Sidenmark, Hans-Werner Gellersen, E. Costanza, T. Heinis
Eye behavior has gained much interest in the VR research community as an interactive input and support for collaboration. Researchers used head behavior and saliency to implement gaze inference models when eye-tracking is missing. However, these solutions are resource-demanding and thus unfit for untethered devices, and their angle accuracy is around 7°, which can be a problem in high-density informative areas. To address this issue, we propose a lightweight deep learning model that generates the probability density function of the gaze as a percentile contour. This solution allows us to introduce a visual attention representation based on a region rather than a point. In this way, we manage the trade-off between the ambiguity of a region and the error of a point. We tested our model in untethered devices with real-time performances; we evaluated its accuracy, outperforming our identified baselines (average fixation map and head direction).
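The region-versus-point trade-off described above can be illustrated under a simplifying assumption the paper does not necessarily make: if the gaze error around a predicted point is modeled as an isotropic 2D Gaussian, then a percentile contour is a circle whose radius follows the Rayleigh quantile. The function names are hypothetical.

```python
import math

def gaze_region_radius(sigma, mass=0.68):
    """Radius of the circular region around the predicted gaze point
    that contains `mass` of the probability, assuming an isotropic
    2D Gaussian error model (Rayleigh quantile)."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - mass))

def point_in_region(gaze, point, sigma, mass=0.68):
    """Check whether `point` falls inside the `mass`-percentile
    gaze region centered on the predicted `gaze` point."""
    r = gaze_region_radius(sigma, mass)
    return math.hypot(point[0] - gaze[0], point[1] - gaze[1]) <= r

# With the ~7 degree error level mentioned in the abstract, a 68%
# region is a circle of roughly 10.6 degrees radius.
r68 = gaze_region_radius(7.0)
```

Reporting the region instead of the point makes the uncertainty explicit: a target either falls inside the contour or it does not, rather than being compared against a single noisy gaze estimate.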
Citations: 3
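The abstract above represents gaze not as a point but as a probability density whose percentile contour defines a region of visual attention. As a rough illustration of that idea (not the paper's actual model), the sketch below extracts the smallest set of grid cells that contains a chosen share of the probability mass from a 2D gaze probability map; the function name and the toy Gaussian map are invented for the example.

```python
import numpy as np

def percentile_region(prob_map, mass=0.68):
    """Boolean mask of the highest-density region of a gaze probability
    map that contains at least `mass` of the total probability.

    `prob_map` is any non-negative 2D array; it is normalized here.
    """
    p = prob_map / prob_map.sum()
    # Sort cell probabilities in descending order and accumulate mass.
    flat = np.sort(p.ravel())[::-1]
    csum = np.cumsum(flat)
    # Probability of the last cell needed to reach the requested mass.
    k = np.searchsorted(csum, mass)
    thresh = flat[min(k, flat.size - 1)]
    # Keep every cell at least as probable as that threshold cell.
    return p >= thresh

# Toy example: an isotropic Gaussian "gaze" blob on a 64x64 grid.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))
region = percentile_region(blob, mass=0.68)
print(region.sum(), region[32, 32])  # region area in cells; center is inside
```

Reporting the area of such a region (rather than a single point estimate) is one way to expose the ambiguity-versus-error trade-off the abstract describes.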
Towards efficient calibration for webcam eye-tracking in online experiments
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529645
Shreshtha Saxena, Elke B. Lange, Lauren Fink
Calibration is performed in eye-tracking studies to map raw model outputs to gaze points on the screen and improve the accuracy of gaze predictions. Calibration parameters such as user-screen distance, camera intrinsic properties, and the position of the screen with respect to the camera can be easily calculated in controlled offline setups; however, their estimation is non-trivial in unrestricted, online experimental settings. Here, we propose the application of deep-learning models for eye tracking in online experiments, providing suitable strategies to estimate calibration parameters and perform personal gaze calibration. Focusing on fixation accuracy, we compare results with respect to calibration frequency, the time point of calibration during data collection (beginning, middle, end), and the calibration procedure (fixation-point or smooth-pursuit based). Calibration using fixation and smooth-pursuit tasks, pooled over the three collection time points, resulted in the best fixation accuracy. By combining device calibration, gaze calibration, and the best-performing deep-learning model, we achieve an accuracy of 2.58°, a considerable improvement over reported accuracies in previous online eye-tracking studies.
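The calibration described above maps raw model outputs to gaze points on the screen. A minimal per-user correction of this kind is often an affine fit between raw gaze estimates and the known positions of fixation targets; the sketch below shows such a least-squares fit with made-up coordinates. It only illustrates the general idea; the paper's actual calibration procedure is not specified in the abstract.

```python
import numpy as np

# Hypothetical data: raw gaze estimates from a webcam model (pixels) and the
# known on-screen coordinates of the fixation targets shown during
# calibration. All values here are invented for illustration.
raw = np.array([[100., 120.], [800., 110.], [450., 400.],
                [120., 700.], [780., 690.]])
targets = np.array([[80., 100.], [820., 95.], [455., 410.],
                    [105., 720.], [800., 705.]])

# Augment raw points with a bias column and solve targets ~ X @ A in the
# least-squares sense; A is a 3x2 matrix encoding scale, shear, and offset.
X = np.hstack([raw, np.ones((len(raw), 1))])
A, *_ = np.linalg.lstsq(X, targets, rcond=None)

def calibrate(points):
    """Apply the fitted affine correction to raw gaze points."""
    pts = np.atleast_2d(points)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

corrected = calibrate(raw)
err = np.linalg.norm(corrected - targets, axis=1).mean()
print(round(err, 2))  # mean residual in pixels after calibration
```

Because the identity map is itself affine, the fitted correction can never increase the squared residual on the calibration points; how well it generalizes depends on how the targets sample the screen.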
Citations: 4
2022 Symposium on Eye Tracking Research and Applications