
Latest publications from the 2022 Symposium on Eye Tracking Research and Applications

Visualizing Instructor’s Gaze Information for Online Video-based Learning: Preliminary Study
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529238
Daun Kim, Jae-Yeop Jeong, Sumin Hong, Namsub Kim, Jin-Woo Jeong
Video-based online educational content has become increasingly popular. However, because communication and interaction between learners and instructors are limited, various problems with learning performance have emerged. Gaze sharing techniques have received much attention as a means to address this problem; however, there is still considerable room for improvement. In this work-in-progress paper, we introduce some possible improvements to gaze visualization strategies and report the preliminary results of our first step towards our final goal. Through a user study with 30 university students, we established the feasibility of the prototype system and identified future directions for our research.
Citations: 0
Mind Wandering Trait-level Tendencies During Lecture Viewing: A Pilot Study
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529241
Francesca Zermiani, A. Bulling, M. Wirzberger
Mind wandering (MW) is defined as a shift of attention to task-unrelated internal thoughts that is pervasive and disruptive for learning performance. Current state-of-the-art gaze-based attention-aware intelligent systems are capable of detecting MW from eye movements and delivering interventions to mitigate its negative effects. However, the beneficial functions of MW and its trait-level tendency, defined as the content of the MW experience, are still largely neglected by these systems. In this pilot study, we address the question of whether different MW trait-level tendencies can be detected from the frequency and duration of off-screen fixations and from blink rate during a lecture-viewing task. We focus on prospective planning and creative problem-solving as two of the main MW trait-level tendencies. Although the differences did not reach significance, the descriptive values show a higher frequency and longer duration of off-screen fixations, but a lower blink rate, in the creative problem-solving MW condition. Interestingly, we do find a highly significant correlation between MW level and engagement scores in the prospective planning MW group. Potential explanations for the observed results are discussed. Overall, these findings represent a preliminary step towards the development of more accurate and adaptive learning technologies, and call for further studies on MW trait-level tendency detection.
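The gaze features this abstract relies on (off-screen fixation frequency and duration, blink rate) are simple to derive from event-level data. A minimal sketch, assuming fixations arrive as (x, y, duration_ms) tuples in screen pixels and blinks as an event count; the function name and screen-size defaults are illustrative, not from the paper:

```python
def mw_gaze_features(fixations, n_blinks, duration_s,
                     screen_w=1920, screen_h=1080):
    """Off-screen fixation frequency/duration and blink rate.

    fixations: list of (x, y, duration_ms) tuples in screen pixels.
    n_blinks: number of blink events in the recording.
    duration_s: recording length in seconds.
    """
    # A fixation is "off-screen" when its centroid falls outside screen bounds.
    off = [f for f in fixations
           if not (0 <= f[0] < screen_w and 0 <= f[1] < screen_h)]
    off_freq = len(off) / duration_s                                  # per second
    off_mean_dur = sum(f[2] for f in off) / len(off) if off else 0.0  # ms
    blink_rate = n_blinks / duration_s * 60.0                         # per minute
    return off_freq, off_mean_dur, blink_rate
```

In a trait-level analysis, these three values would be computed per participant and per condition before comparison.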
Citations: 0
Faster, Better Blink Detection through Curriculum Learning by Augmentation
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529617
A. Al-Hindawi, Marcela P. Vizcaychipi, Y. Demiris
Blinking is a useful biological signal that can gate gaze regression models to avoid the use of incorrect data in downstream tasks. Existing datasets are imbalanced both in class frequency and in intra-class difficulty, which we demonstrate is a barrier for curriculum learning. We thus propose a novel curriculum augmentation scheme that aims to address frequency and difficulty imbalances implicitly, which we term Curriculum Learning by Augmentation (CLbA). Using CLbA, we achieve a state-of-the-art mean Average Precision (mAP) of 0.971 using ResNet-18, up from the previous state-of-the-art mAP of 0.757 using DenseNet-121, while outcompeting Curriculum Learning by Bootstrapping (CLbB) by a significant margin with improved calibration. This new training scheme thus allows the use of smaller and more performant Convolutional Neural Network (CNN) backbones fulfilling Nyquist criteria to achieve a sampling frequency of 102.3 Hz. This paves the way for inference of blinking in real-time applications.
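Average Precision, the metric reported above, summarizes a ranked list of detection scores. A minimal sketch of the standard ranked-retrieval formulation (the paper's detection-style mAP pipeline may differ in matching details):

```python
def average_precision(scores, labels):
    """AP over a ranked list: mean of precision at each true-positive rank.

    scores: detection confidences; labels: 1 for positive, 0 for negative.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    precisions = []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            precisions.append(tp / rank)  # precision at this recall point
    return sum(precisions) / sum(labels)
```

Mean Average Precision (mAP) is then the mean of this quantity over classes or evaluation instances.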
Citations: 2
Inferring Native and Non-Native Human Reading Comprehension and Subjective Text Difficulty from Scanpaths in Reading
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529639
David Reich, Paul Prasse, Chiara Tschirner, Patrick Haller, Frank Goldhammer, L. Jäger
Eye movements in reading are known to reflect cognitive processes involved in reading comprehension at all linguistic levels, from the sub-lexical to the discourse level. This means that reading comprehension and other properties of the text and/or the reader should be possible to infer from eye movements. Consequently, we develop the first neural sequence architecture for this type of task, which models scanpaths in reading and incorporates lexical, semantic and other linguistic features of the stimulus text. Our proposed model outperforms state-of-the-art models in various tasks. These include inferring reading comprehension or text difficulty, and assessing whether the reader is a native speaker of the text’s language. We further conduct an ablation study to investigate the impact of each component of our proposed neural network on its performance.
Citations: 9
Comparison of Webcam and Remote Eye Tracking
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529615
K. Wisiecka, Krzysztof Krejtz, I. Krejtz, Damian Sromek, Adam Cellary, Beata Lewandowska, A. Duchowski
We compare the measurement error and validity of webcam-based eye tracking to that of a remote eye tracker as well as software integration of both. We ran a study with n = 83 participants, consisting of a point detection task and an emotional visual search task under three between-subjects experimental conditions (webcam-based, remote, and integrated). We analyzed location-based (e.g., fixations) and process-based eye tracking metrics (ambient-focal attention dynamics). Despite higher measurement error of webcam eye tracking, our results in all three experimental conditions were in line with theoretical expectations. For example, time to first fixation toward happy faces was significantly shorter than toward sad faces (the happiness-superiority effect). As expected, we also observed the switch from ambient to focal attention depending on complexity of the visual stimuli. We conclude that webcam-based eye tracking is a viable, low-cost alternative to remote eye tracking.
Citations: 18
A study on the generalizability of Oculomotor Plant Mathematical Model
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532523
Dmytro Katrychuk, Oleg V. Komogortsev
The Oculomotor Plant Mathematical Model (OPMM) is a dynamic system that describes a human eye in motion. In this study, we focus on an anatomically inspired homeomorphic model in which every component is a mathematical representation of a certain biological phenomenon of a real oculomotor plant. This approach estimates the internal state of the oculomotor plant from recorded eye movements. In the past, such models were shown to be useful in biometrics and in gaze-contingent rendering via eye movement prediction. In previous studies, an implicit underlying assumption was that a set of parameters estimated for a certain subject should remain consistent over time and generalize to unseen data. We note a major drawback of the prior work: it operated under this assumption without explicit validation. This work creates a quantifiable baseline for the specific OPMM in which the generalizability of the model parameters is the foundational property of their estimation.
Citations: 0
Linked and Coordinated Visual Analysis of Eye Movement Data
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531163
Michael Burch, Günter Wallner, Veerle Fürst, Teodor-Cristian Lungu, Daan Boelhouwers, Dhiksha Rajasekaran, Richard Farla, Sander van Heesch
Eye movement data can be used for a variety of research in marketing, advertisement, and other design-related industries to gain interesting insights into customer preferences. However, interpreting such data can be a challenging task due to its spatio-temporal complexity. In this paper we describe a web-based tool that has been developed to provide various visualizations for interpreting eye movement data of static stimuli. The tool provides several techniques to visualize and analyze eye movement data. These visualizations are interactive and linked in a coordinated way to help gain more insights. Overall, this paper illustrates the features and functionality offered by the tool by using data recorded from transport map readers in a previously conducted experiment as a use case. Furthermore, the paper discusses limitations of the tool and possible future developments.
Citations: 0
Characterizing the expertise of Aircraft Maintenance Technicians using eye-tracking.
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532199
F. Paris, Remy Casanova, M. Bergeonneau, D. Mestre
Aircraft maintenance technicians (AMTs) play an essential role in the life-long security of helicopters. There are two major types of operations in maintenance activity: information intake/processing and motor actions. Modeling the expertise of AMTs is the main objective of this doctoral project. Given the constraints of real-world research, mobile eye-tracking appears to be an essential tool for measuring information intake, notably concerning the use of maintenance documentation during maintenance task preparation and execution. This extended abstract presents the main research objectives, our approach and methodology, and some preliminary results.
Citations: 0
Distance between gaze and laser pointer predicts performance in video-based e-learning independent of the presence of an on-screen instructor
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529620
Marian Sauter, Tobias Wagner, A. Huckauf
In online lectures, showing an on-screen instructor gained popularity amid the Covid-19 pandemic. However, the evidence in its favor is mixed: on-screen instructors draw attention and may distract from the content. In contrast, using signaling (e.g., with a digital pointer) provides known benefits for learners. But the effects of signaling have only been researched in the absence of an on-screen instructor. In the present explorative study, we investigated the effects of an on-screen instructor on the division of learners' attention; specifically, on following a digital pointer signal with their gaze. The presence of an instructor led to an increased number of fixations in the presenter area. This affected neither learning outcomes nor gaze patterns following the pointer. The average distance between the learner's gaze and the pointer position predicts the student's quiz performance, independent of the presence of an on-screen instructor. This can also help in creating automated immediate-feedback systems for educational videos.
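The predictor this abstract describes, the average distance between gaze and pointer, is straightforward to compute once both signals are time-aligned. A minimal sketch, assuming gaze and pointer positions have been resampled onto a common timeline in screen pixels (the function name is illustrative, not from the paper):

```python
import math

def mean_gaze_pointer_distance(gaze, pointer):
    """Mean Euclidean distance between time-aligned (x, y) sample pairs."""
    if len(gaze) != len(pointer) or not gaze:
        raise ValueError("expected two equally long, non-empty sample lists")
    return sum(math.dist(g, p) for g, p in zip(gaze, pointer)) / len(gaze)
```

An immediate-feedback system could threshold this value per video segment to flag learners whose gaze drifts away from the signaled content.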
Citations: 3
Scanpath Comparison using ScanGraph for Education and Learning Purposes: Summary of previous educational studies performed with the use of ScanGraph
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529243
S. Popelka, Marketa Beitlova
This short paper summarizes previous studies from the area of education in which ScanGraph, a tool developed for scanpath comparison, has been used so far. The paper aims to introduce this freely available online tool to the community of eye movement researchers focusing on eye tracking in education. ScanGraph allows calculation of similarity using the Levenshtein and Damerau-Levenshtein algorithms and the Needleman-Wunsch algorithm (similar to ScanMatch). The results are visualized in a simple graph showing similarities among individual participants. The tool allows exporting the similarity matrix, which can be used for more detailed analysis. Moreover, it is possible to visualize similarity data calculated using the MultiMatch method. In the article, the tool’s functionality is described and introduced through case studies from the fields of geographic education and physics.
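Levenshtein-based scanpath comparison of the kind ScanGraph performs operates on sequences of AOI (area-of-interest) labels. A minimal sketch of the edit-distance core; the normalization to [0, 1] by the longer sequence length is an illustrative choice, not necessarily ScanGraph's exact formula:

```python
def levenshtein(a, b):
    """Edit distance between two AOI label sequences (strings or lists)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(a, b):
    """Normalized similarity in [0, 1]; 1.0 means identical scanpaths."""
    return 1 - levenshtein(a, b) / max(len(a), len(b))
```

Computing this pairwise over all participants yields the similarity matrix that the tool visualizes as a graph.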
Citations: 3