
Latest publications from the 2022 Symposium on Eye Tracking Research and Applications

Faster, Better Blink Detection through Curriculum Learning by Augmentation
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529617
A. Al-Hindawi, Marcela P. Vizcaychipi, Y. Demiris
Blinking is a useful biological signal that can gate gaze regression models to avoid the use of incorrect data in downstream tasks. Existing datasets are imbalanced both in class frequency and in intra-class difficulty, which we demonstrate is a barrier for curriculum learning. We thus propose a novel curriculum augmentation scheme that aims to address frequency and difficulty imbalances implicitly, which we are terming Curriculum Learning by Augmentation (CLbA). Using CLbA, we achieve a state-of-the-art mean Average Precision (mAP) of 0.971 using ResNet-18, up from the previous state-of-the-art mAP of 0.757 using DenseNet-121, whilst outcompeting Curriculum Learning by Bootstrapping (CLbB) by a significant margin with improved calibration. This new training scheme thus allows the use of smaller and more performant Convolutional Neural Network (CNN) backbones fulfilling Nyquist criteria to achieve a sampling frequency of 102.3 Hz. This paves the way for inference of blinking in real-time applications.
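As a rough illustration of the Nyquist argument in the abstract above (not the authors' computation — the 100 ms minimal blink duration is an assumed typical value):

```python
# Loose Nyquist-style check: to resolve an event, sample at least
# twice within the shortest event duration. The 0.1 s blink duration
# is an illustrative assumption, not a figure from the paper.
def min_sampling_rate_hz(shortest_event_s: float) -> float:
    """Minimum sampling rate to catch events of the given duration."""
    return 2.0 / shortest_event_s

required = min_sampling_rate_hz(0.1)  # 100 ms blink -> 20 Hz minimum
achieved = 102.3                      # rate reported in the abstract
print(required, achieved >= required)
```

Under these assumptions, the reported 102.3 Hz comfortably exceeds the minimum rate needed to resolve individual blinks.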
Citations: 2
Fairness in Oculomotoric Biometric Identification
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529633
Paul Prasse, D. R. Reich, Silvia Makowski, L. Jäger, T. Scheffer
Gaze patterns are known to be highly individual, and therefore eye movements can serve as a biometric characteristic. We explore aspects of the fairness of biometric identification based on gaze patterns. We find that while oculomotoric identification does not favor any particular gender and does not significantly favor any age range, it is unfair with respect to ethnicity. Moreover, fairness concerning ethnicity cannot be achieved by balancing the training data for the best-performing model.
Citations: 1
Inferring Native and Non-Native Human Reading Comprehension and Subjective Text Difficulty from Scanpaths in Reading
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529639
David Reich, Paul Prasse, Chiara Tschirner, Patrick Haller, Frank Goldhammer, L. Jäger
Eye movements in reading are known to reflect cognitive processes involved in reading comprehension at all linguistic levels, from the sub-lexical to the discourse level. This means that it should be possible to infer reading comprehension and other properties of the text and/or the reader from eye movements. Consequently, we develop the first neural sequence architecture for this type of task, which models scanpaths in reading and incorporates lexical, semantic, and other linguistic features of the stimulus text. Our proposed model outperforms state-of-the-art models in various tasks, including inferring reading comprehension or text difficulty and assessing whether the reader is a native speaker of the text's language. We further conduct an ablation study to investigate the impact of each component of the proposed neural network on its performance.
Citations: 9
User Perception of Smooth Pursuit Target Speed
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529234
Heiko Drewes, Sophia Sakel, H. Hussmann
Gaze-aware interfaces should work on all display sizes. This paper investigates whether angular velocity or tangential speed should be kept when scaling a gaze-aware interface based on circular smooth pursuits to another display size. We also address the question of which target speed and which trajectory size feel most comfortable for users. We present the results of a user study in which participants were asked how they perceived the speed and the radius of a circularly moving smooth pursuit target. The data show that the users' judgment of the optimal speed corresponds with an optimal detection rate. The results also enable us to give an optimal value pair for target speed and trajectory radius. Additionally, we give a functional relation for adapting the target speed when scaling the geometry, so as to keep optimal detection rate and user experience.
Citations: 1
Comparison of Webcam and Remote Eye Tracking
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529615
K. Wisiecka, Krzysztof Krejtz, I. Krejtz, Damian Sromek, Adam Cellary, Beata Lewandowska, A. Duchowski
We compare the measurement error and validity of webcam-based eye tracking to those of a remote eye tracker, as well as a software integration of both. We ran a study with n = 83 participants, consisting of a point-detection task and an emotional visual search task under three between-subjects experimental conditions (webcam-based, remote, and integrated). We analyzed location-based (e.g., fixations) and process-based eye-tracking metrics (ambient-focal attention dynamics). Despite the higher measurement error of webcam eye tracking, our results in all three experimental conditions were in line with theoretical expectations. For example, time to first fixation toward happy faces was significantly shorter than toward sad faces (the happiness-superiority effect). As expected, we also observed the switch from ambient to focal attention depending on the complexity of the visual stimuli. We conclude that webcam-based eye tracking is a viable, low-cost alternative to remote eye tracking.
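Time to first fixation, one of the location-based metrics mentioned in the abstract above, reduces to a simple scan over fixation records. A minimal sketch (the fixation data and area-of-interest layout are hypothetical, not from the study):

```python
# Time to first fixation (TTFF) on an area of interest (AOI).
# Fixation records and AOI coordinates below are hypothetical.
def time_to_first_fixation(fixations, aoi):
    """fixations: list of (onset_s, x, y); aoi: (x0, y0, x1, y1).
    Returns the onset of the first fixation inside the AOI, or None."""
    x0, y0, x1, y1 = aoi
    for onset, x, y in fixations:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return onset
    return None

fixations = [(0.12, 50, 60), (0.45, 300, 310), (0.80, 305, 320)]
happy_face_aoi = (280, 280, 360, 360)
print(time_to_first_fixation(fixations, happy_face_aoi))  # 0.45
```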
Citations: 18
Scanpath Comparison using ScanGraph for Education and Learning Purposes: Summary of previous educational studies performed with the use of ScanGraph
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529243
S. Popelka, Marketa Beitlova
This short paper summarizes previous studies from the area of education in which ScanGraph, a tool developed for scanpath comparison, has been used. The paper aims to introduce this freely available online tool to the community of eye movement researchers focusing on eye tracking in education. ScanGraph allows the calculation of similarity using the Levenshtein and Damerau-Levenshtein algorithms and the Needleman-Wunsch algorithm (similar to ScanMatch). The results are visualized in a simple graph showing similarities among individual participants. The tool allows exporting the similarity matrix, which might be further used for more detailed analysis. Moreover, it is possible to visualize similarity data calculated using the MultiMatch method. The tool's functionality is described and illustrated with case studies from the fields of geographic education and physics.
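As a minimal sketch of the string-edit approach named in the abstract above — plain Levenshtein distance on AOI-coded scanpaths, with an illustrative normalization (ScanGraph's exact similarity formula may differ):

```python
# Levenshtein distance between two AOI-coded scanpath strings,
# turned into a simple similarity score in [0, 1]. The normalization
# is illustrative; ScanGraph's exact formula may differ.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

# Scanpaths coded as AOI letters, e.g. A = legend, B = map, C = title.
print(similarity("ABBC", "ABC"))  # 0.75
```

Damerau-Levenshtein additionally counts transpositions of adjacent AOIs as a single edit, and Needleman-Wunsch scores a global alignment instead of counting edits.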
Citations: 3
Linked and Coordinated Visual Analysis of Eye Movement Data
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531163
Michael Burch, Günter Wallner, Veerle Fürst, Teodor-Cristian Lungu, Daan Boelhouwers, Dhiksha Rajasekaran, Richard Farla, Sander van Heesch
Eye movement data can be used for a variety of research in marketing, advertising, and other design-related industries to gain interesting insights into customer preferences. However, interpreting such data can be a challenging task due to its spatio-temporal complexity. In this paper we describe a web-based tool that has been developed to provide various visualizations for interpreting eye movement data of static stimuli. The tool provides several techniques to visualize and analyze eye movement data. These visualizations are interactive and linked in a coordinated way to help gain more insights. Overall, this paper illustrates the features and functionality offered by the tool using data recorded from transport-map readers in a previously conducted experiment as a use case. Furthermore, the paper discusses limitations of the tool and possible future developments.
Citations: 0
Distance between gaze and laser pointer predicts performance in video-based e-learning independent of the presence of an on-screen instructor
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529620
Marian Sauter, Tobias Wagner, A. Huckauf
In online lectures, showing an on-screen instructor gained popularity amidst the Covid-19 pandemic. However, evidence in favor of this is mixed: on-screen instructors draw attention and may distract from the content. In contrast, using signaling (e.g., with a digital pointer) provides known benefits for learners. But the effects of signaling have only been researched in the absence of an on-screen instructor. In the present explorative study, we investigated the effects of an on-screen instructor on the division of learners' attention; specifically, on following a digital pointer signal with their gaze. The presence of an instructor led to an increased number of fixations in the presenter area. This affected neither learning outcomes nor gaze patterns following the pointer. The average distance between the learner's gaze and the pointer position predicts the student's quiz performance, independent of the presence of an on-screen instructor. This can also help in creating automated immediate-feedback systems for educational videos.
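The predictor described above — average distance between gaze and pointer — reduces to a straightforward computation. A sketch over hypothetical, time-aligned samples (not the authors' pipeline):

```python
# Average Euclidean distance between time-aligned gaze and pointer
# samples. The sample coordinates below are hypothetical.
from math import hypot

def mean_gaze_pointer_distance(gaze, pointer):
    """gaze, pointer: equal-length lists of (x, y) screen coordinates."""
    assert len(gaze) == len(pointer)
    dists = [hypot(gx - px, gy - py)
             for (gx, gy), (px, py) in zip(gaze, pointer)]
    return sum(dists) / len(dists)

gaze = [(100, 100), (203, 104), (310, 95)]
pointer = [(100, 100), (200, 100), (300, 100)]
print(mean_gaze_pointer_distance(gaze, pointer))
```

In a real pipeline, gaze and pointer streams would first need resampling to a common timeline before pairing samples like this.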
Citations: 3
A study on the generalizability of Oculomotor Plant Mathematical Model
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532523
Dmytro Katrychuk, Oleg V. Komogortsev
The Oculomotor Plant Mathematical Model (OPMM) is a dynamic system that describes a human eye in motion. In this study, we focus on an anatomically inspired homeomorphic model in which every component is a mathematical representation of a certain biological phenomenon of a real oculomotor plant. This approach estimates the internal state of the oculomotor plant from recorded eye movements. In the past, such models were shown to be useful in biometrics and in gaze-contingent rendering via eye movement prediction. In previous studies, an implicit underlying assumption was that a set of parameters estimated for a certain subject should remain consistent in time and generalize to unseen data. We note a major drawback of the prior work, as it operated under this assumption without explicit validation. This work creates a quantifiable baseline for the specific OPMM, in which the generalizability of the model parameters is the foundational property of their estimation.
Citations: 0
Characterizing the expertise of Aircraft Maintenance Technicians using eye-tracking.
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532199
F. Paris, Remy Casanova, M. Bergeonneau, D. Mestre
Aircraft maintenance technicians (AMTs) play an essential role in the life-long security of helicopters. There are two major types of operations in maintenance activity: information intake/processing and motor actions. Modeling the expertise of the AMT is the main objective of this doctoral project. Given the constraints of real-world research, mobile eye tracking appears to be an essential tool for the measurement of information intake, notably concerning the use of maintenance documentation during maintenance task preparation and execution. This extended abstract presents the main research objectives, our approach and methodology, and some preliminary results.
Citations: 0