
Journal of Eye Movement Research: Latest Publications

Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-06-03 | DOI: 10.16910/jemr.14.3.2
Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev

Typically, the position error of an eye-tracking device is measured as the distance of the eye-position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We will present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality is true even if there is only a single, continuous tracking fixation segment per trial. We present several approaches to measure accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.
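To make the quantities in this abstract concrete, here is a minimal, hypothetical Python sketch (not the authors' code): it derives per-sample angular offsets from gaze and target positions and screens the resulting distribution for multimodality by comparing Gaussian mixtures with different component counts via BIC. The mixture-based mode count is an illustrative stand-in for whatever modality test the paper actually uses.

import numpy as np
from sklearn.mixture import GaussianMixture

def angular_offset(gaze_deg, target_deg):
    # Euclidean distance (in degrees of visual angle) between gaze and target samples
    return np.linalg.norm(np.asarray(gaze_deg) - np.asarray(target_deg), axis=1)

def estimated_modes(offsets, max_modes=3):
    # pick the mixture component count with the lowest BIC as a rough modality estimate
    x = np.asarray(offsets).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in range(1, max_modes + 1)]
    return int(np.argmin(bics)) + 1

# Accuracy as the mean offset is only straightforward to interpret when
# estimated_modes(offsets) == 1; otherwise a per-mode summary is more informative.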

Citations: 0
Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-06-03 | DOI: 10.16910/jemr.12.3.0
Rudolf Groner, Enkelejda Kasneci
<p><p>The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w
他们的共同目标是,在数字时代,根据人类的能力和局限性,提高技术的潜力和安全性。
{"title":"Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.","authors":"Rudolf Groner, Enkelejda Kasneci","doi":"10.16910/jemr.12.3.0","DOIUrl":"10.16910/jemr.12.3.0","url":null,"abstract":"&lt;p&gt;&lt;p&gt;The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. 
Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"12 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8182438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-05-31 | DOI: 10.16910/jemr.14.1.6
Ehab W Hermena

Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.

Citations: 1
Gaze aversion in conversational settings: An investigation based on mock job interview.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-05-19 | DOI: 10.16910/jemr.14.1.1
Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal

We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and discuss some future research problems.
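To make the modelling step concrete, the following small Python sketch (illustrative only, not the authors' code) estimates a Discrete-Time Markov Chain transition matrix from a categorical gaze-state sequence; the state labels are hypothetical.

import numpy as np

def transition_matrix(states, labels):
    # count transitions between consecutive gaze states, then row-normalize
    idx = {s: i for i, s in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for a, b in zip(states[:-1], states[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

labels = ["contact", "avert-left", "avert-right", "avert-diagonal"]   # hypothetical states
sequence = ["contact", "avert-left", "contact", "avert-diagonal", "contact"]
P = transition_matrix(sequence, labels)   # P[i, j] = Pr(next state j | current state i)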

Citations: 8
Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-05-19 | DOI: 10.16910/jemr.14.1.5
Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer

Eye tracking (ET) has been shown to reveal the wearer's cognitive processes using the measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show that a considerable increase in interpretable fixation data, from 23.8% to 78.3% for the screw AOI and from 4.5% to 67.2% for the screwdriver AOI, was achieved when the near-peripheral field of vision was incorporated. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
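The distance computation itself is straightforward. The minimal Python sketch below assumes the detector returns each AOI as a binary pixel mask per video frame (an assumption for illustration, not the paper's exact pipeline) and returns the smallest pixel distance from the gaze point to that mask.

import numpy as np

def object_gaze_distance(aoi_mask, gaze_xy):
    # aoi_mask: H x W boolean array marking the detected AOI
    # gaze_xy: (x, y) gaze point in pixel coordinates
    ys, xs = np.nonzero(aoi_mask)
    if xs.size == 0:
        return np.nan                      # AOI not detected in this frame
    gx, gy = gaze_xy
    return float(np.min(np.hypot(xs - gx, ys - gy)))   # near zero when gaze lands inside the AOI

# Applied frame by frame, this yields one continuous OGD time series per AOI,
# which can then be analyzed jointly as a multi-OGD representation.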

Citations: 4
A low-cost, high-performance video-based binocular eye tracker for psychophysical research.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-05-05 | DOI: 10.16910/jemr.14.3.3
Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel

We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration simultaneously in both eyes while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because it is known that pupil diameter changes can be erroneously registered by pupil-based trackers as a change in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
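As a rough illustration of what such a linear calibration can look like (a generic Python sketch under assumptions of my own, not the authors' Visual C++ implementation), pupil-center coordinates recorded while the subject fixates the four known screen points can be mapped to screen coordinates with an affine least-squares fit:

import numpy as np

def fit_affine_calibration(pupil_xy, screen_xy):
    # pupil_xy, screen_xy: (4, 2) arrays collected at the four fixation points
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])    # rows are [px, py, 1]
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)    # (3, 2) affine map
    return coeffs

def pupil_to_screen(coeffs, pupil_xy):
    pupil_xy = np.atleast_2d(pupil_xy)
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return A @ coeffs                                         # estimated gaze position on screen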

Citations: 10
The interplay between task difficulty and microsaccade rate: Evidence for the critical role of visual load.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-04-28 | DOI: 10.16910/jemr.13.5.6
Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz

In previous research, microsaccades have been suggested as psychophysiological indicators of task load. So far, it is still under debate how different types of task demands influence microsaccade rate. This study examines the relation between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back), in which visual load (letters vs. abstract figures) and mental task load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), while mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.
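The abstract does not spell out how microsaccades were detected, so the Python sketch below is only a common approach in the spirit of the Engbert & Kliegl velocity-threshold method, with illustrative parameter values; it turns a gaze trace into the events-per-second rate that is compared across conditions.

import numpy as np

def microsaccade_rate(x_deg, y_deg, fs, lam=6.0, min_samples=3):
    # x_deg, y_deg: gaze position in degrees; fs: sampling rate in Hz
    vx = np.gradient(x_deg) * fs
    vy = np.gradient(y_deg) * fs
    # median-based velocity thresholds (one per axis), scaled by lambda
    tx = lam * np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    ty = lam * np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / tx) ** 2 + (vy / ty) ** 2 > 1.0
    # count supra-threshold runs lasting at least min_samples samples
    padded = np.concatenate(([False], above, [False])).astype(int)
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    n_events = int(np.sum((ends - starts) >= min_samples))
    return n_events / (len(x_deg) / fs)    # microsaccades per second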

Citations: 5
Beyond the tracked line of sight - Evaluation of the peripheral usable field of view in a simulator setting.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-04-26 | DOI: 10.16910/jemr.12.3.9
Jan Bickerdt, Hannes Wendland, David Geisler, Jan Sonnenberg, Enkelejda Kasneci

Combining advanced gaze tracking systems with the latest vehicle environment sensors opens up new fields of application for driver assistance. Gaze tracking enables researchers to determine the location of a fixation and, under consideration of the visual saliency of the scene, to predict visual perception of objects. The perceptual limits for stimulus identification found in the literature have mostly been determined under laboratory conditions using isolated stimuli, with a fixed gaze point, on a single screen with limited coverage of the field of view. These limits are usually reported as hard limits. Such commonly used limits are therefore not applicable to settings with a wide field of view, natural viewing behavior, and multiple stimuli. As the handling of sudden, potentially critical driving maneuvers relies heavily on peripheral vision, the peripheral limits for feature perception need to be included in the determined perceptual limits. To analyze human visual perception of different, simultaneously occurring object changes (shape, color, movement), we conducted a study with 50 participants in a driving simulator, and we propose a novel way to determine perceptual limits that is more applicable to driving scenarios.

Citations: 1
Interaction between image and text during the process of biblical art reception.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-03-12 | DOI: 10.16910/jemr.13.2.14
Gregor Hardiess, Caecilie Weissert

In our exploratory study, we ask how naive observers, without a distinct religious background, approach biblical art that combines image and text. For this purpose, we choose the book 'New biblical figures of the Old and New Testament', published in 1569, as the source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the process of such biblical art reception, we selected four relevant images from the book and measured the eye movements of participants in order to characterize and quantify their scanning behavior related to such stimuli in terms of i) looking at text (text usage), ii) text vs. image interaction measures (semantic or contextual relevance of text), and iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, the semantics of the texts are used to guide eye movements later through the image, supporting the formation of the narrative.

Citations: 2
Developing Expert Gaze Pattern in Laparoscopic Surgery Requires More than Behavioral Training.
IF 2.1 | Psychology (Tier 4) | Ophthalmology (Q3) | Pub Date: 2021-03-10 | DOI: 10.16910/jemr.14.2.2
Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D Champion, Morgan L Cox, L Gregory Appelbaum

Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed and five novices were trained and assessed in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence to allow recordings of dwell durations based on pre-defined areas of interest (AOIs). Trained novices were shown to reach more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, leading to equivalent behavioral performance to that of surgeons. Despite this equivalence in behavioral performance, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen's ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze pattern is still less efficient than that of surgeons, motivating surgical training programs to involve eye tracking technology in their design and evaluation.
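As a rough illustration of how a behavioral learning plateau can be estimated (the model choice and the numbers below are my own assumptions for illustration; the abstract does not state the fitting procedure), per-visit scores can be fit with an exponential learning curve and the final score compared against the fitted asymptote:

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(visit, plateau, gain, rate):
    # exponential approach to an asymptotic plateau
    return plateau - gain * np.exp(-rate * visit)

# hypothetical per-visit scores for one trainee (illustrative numbers only)
visits = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([48.0, 62.0, 70.0, 74.0, 75.5])

(plateau, gain, rate), _ = curve_fit(learning_curve, visits, scores, p0=(80.0, 40.0, 0.5))
percent_of_plateau = 100.0 * scores[-1] / plateau    # analogous to the reported plateau percentage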

Citations: 7