
Journal of Eye Movement Research: Latest Publications

Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-07-13; eCollection Date: 2021-01-01. DOI: 10.16910/jemr.14.2.5
Liu Xin, Zheng Bin, Duan Xiaoqin, He Wenjing, Li Yuandong, Zhao Jinyu, Zhao Chen, Wang Lin

Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their healthcare skills. When a trainee encounters difficulty during practice, they need feedback from experts to improve their performance. Personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during their colonoscopic performance in simulation. We examined changes in eye movement behavior during moments of navigation loss (MNL), a signature sign of task difficulty during colonoscopy, and tested whether deep learning algorithms can detect MNLs from eye-tracking data. Human gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks with three different data-feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to the expert's judgment of the MNLs based on colonoscopic videos. The best classification outcome was achieved when the human eye data were augmented with 1,000 synthesized eye-data samples, yielding optimal accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%). This study builds an important foundation for our work on developing an education system for training healthcare skills using simulation.
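For readers who want a concrete picture of the classification stage, the following is a minimal sketch, not the authors' published implementation: an LSTM that labels fixed-length windows of eye features (gaze x/y and pupil size are assumed here) as MNL or non-MNL, with DCGAN-generated windows simply concatenated to the human data as one possible feeding strategy. All shapes, feature choices, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact architecture): an LSTM classifier that
# labels fixed-length windows of eye features (gaze x, gaze y, pupil size) as
# "moment of navigation loss" (MNL) vs. normal navigation. Synthetic windows
# (e.g., produced by a DCGAN) are concatenated with the human data before
# training. Shapes and hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

class MNLClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)              # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)   # one logit per window

def train(model, X_human, y_human, X_synth, y_synth, epochs=20):
    # One possible feeding strategy: augment human windows with synthesized ones.
    X = torch.tensor(np.concatenate([X_human, X_synth]), dtype=torch.float32)
    y = torch.tensor(np.concatenate([y_human, y_synth]), dtype=torch.float32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)           # full-batch training for brevity
        loss.backward()
        opt.step()
    return model
```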

Citations: 3
Vergence Fusion Sustaining Oscillations.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-06-28. DOI: 10.16910/jemr.14.1.4
John Semmlow, Chang Yaramothu, Mitchell Scheiman, Tara L Alvarez

Introduction: Previous studies have shown that the slow, or fusion sustaining, component of disparity vergence contains oscillatory behavior as would be expected if fusion is sustained by visual feedback. This study extends the examination of this behavior to a wider range of frequencies and a larger number of subjects.

Methods: Disparity vergence responses to symmetrical 4.0 deg step changes in target position were recorded in 20 subjects. Approximately three seconds of the late component of each response were isolated using interactive graphics and the frequency spectrum calculated. Peaks in these spectra associated with oscillatory behavior were identified and examined.

Results: All subjects exhibited oscillatory behavior with fundamental frequencies ranging between 0.37 and 0.55 Hz, much lower than those identified in the earlier study. All responses showed significant higher-frequency components. The relationship between these higher-frequency components and the fundamental frequency suggests that they may be harmonics. A correlation was found across subjects between the amplitude of the fundamental frequency and the maximum velocity of the fusion-initiating component, probably due to the gain of shared neural pathways.

Conclusion: Low-frequency oscillatory behavior was found in all subjects, adding support to the view that the slow, or fusion-sustaining, component is mediated by feedback control.
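As an illustration of the spectral analysis described in the Methods, the sketch below computes the amplitude spectrum of an isolated late-component segment of roughly three seconds and returns the frequency of its largest peak. The sampling rate, windowing, and detrending choices are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of the spectral step: take ~3 s of the late (fusion-sustaining)
# component of a vergence response, compute its amplitude spectrum, and report
# the dominant low-frequency peak. fs and detrending are assumed values.
import numpy as np
from scipy.signal import detrend, find_peaks

def dominant_oscillation(late_component, fs=500.0):
    """late_component: 1-D array (deg), ~3 s of the slow vergence component."""
    x = detrend(late_component)                    # remove slow drift
    spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    peaks, _ = find_peaks(spectrum)
    if peaks.size == 0:
        return None
    f0 = freqs[peaks[np.argmax(spectrum[peaks])]]  # frequency of the largest peak
    return f0                                      # e.g., ~0.4-0.6 Hz in this study
```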

Citations: 2
Optimizing the usage of pupillary based indicators for cognitive workload.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-06-11; eCollection Date: 2021-01-01. DOI: 10.16910/jemr.14.2.4
Benedict C O F Fehringer

The Index of Cognitive Activity (ICA) and its open-source alternative, the Index of Pupillary Activity (IPA), are pupillary-based indicators of cognitive workload that are independent of light changes. Both indicators were investigated regarding the influences of cognitive demand, fatigue, and inter-individual differences. In addition, the variability of pupil changes between the two eyes (difference values) was compared with the usually calculated pupillary changes averaged over both eyes (mean values). Fifty-five participants performed a spatial thinking test with six distinct difficulty levels, the R-Cube-Vis Test, as well as a simple fixation task before and after the R-Cube-Vis Test. The distributions of the ICA and IPA were comparable. The ICA/IPA values were lower during the simple fixation tasks than during the cognitively demanding R-Cube-Vis Test. A fatigue effect was found only for the mean ICA values. The effects of both indicators between difficulty levels of the test were larger when inter-individual differences were controlled using z-standardization. The difference values seemed to control for fatigue and appeared to differentiate better between more demanding cognitive tasks than the mean values. The derived recommendations for the ICA/IPA values are beneficial for gaining more insight into individual performance and behavior during, e.g., training and testing scenarios.
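The sketch below illustrates the two aggregation choices compared in this study, the binocular mean versus the between-eye difference of per-bin indicator values, together with per-participant z-standardization. It assumes ICA or IPA values have already been computed per eye and time bin; it is not an implementation of the IPA algorithm itself.

```python
# Illustrative sketch of the aggregation choices: binocular mean vs. left-right
# difference of precomputed per-bin indicator values (ICA or IPA), plus
# per-participant z-standardization to control inter-individual differences.
import numpy as np

def mean_and_difference(left_vals, right_vals):
    """left_vals, right_vals: per-bin indicator values for each eye."""
    left, right = np.asarray(left_vals), np.asarray(right_vals)
    mean_vals = (left + right) / 2.0          # the usual binocular average
    diff_vals = np.abs(left - right)          # between-eye variability
    return mean_vals, diff_vals

def z_standardize_per_participant(values_by_participant):
    """values_by_participant: dict participant_id -> 1-D array of values."""
    return {
        pid: (v - np.mean(v)) / np.std(v, ddof=1)
        for pid, v in ((p, np.asarray(x)) for p, x in values_by_participant.items())
    }
```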

Citations: 5
Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-06-03. DOI: 10.16910/jemr.14.3.2
Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev

Typically, the position error of an eye-tracking device is measured as the distance of the eye-position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We will present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality is true even if there is only a single, continuous tracking fixation segment per trial. We present several approaches to measure accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.
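A minimal sketch of the quantities discussed above: per-sample angular offsets from the target, the conventional accuracy measure (their mean), and a rough unimodality check. The KDE-based mode count and the Shapiro-Wilk normality test used here are illustrative stand-ins, not the statistical procedure used in the paper.

```python
# Sketch of angular-offset accuracy plus a rough check of unimodality/normality.
# Gaze and target coordinates are assumed to be in degrees of visual angle.
import numpy as np
from scipy.stats import gaussian_kde, shapiro

def angular_offsets(gaze_xy, target_xy):
    """gaze_xy: (n, 2) gaze samples in deg; target_xy: (2,) target position."""
    return np.linalg.norm(np.asarray(gaze_xy) - np.asarray(target_xy), axis=1)

def summarize_offsets(offsets):
    offsets = np.asarray(offsets)
    kde = gaussian_kde(offsets)
    grid = np.linspace(offsets.min(), offsets.max(), 512)
    density = kde(grid)
    # Count interior local maxima of the smoothed density as candidate modes.
    modes = np.sum((density[1:-1] > density[:-2]) & (density[1:-1] > density[2:]))
    return {
        "accuracy_mean_offset_deg": float(offsets.mean()),  # the usual "accuracy"
        "n_modes_estimate": int(modes),                     # >1 suggests multimodality
        "shapiro_p": float(shapiro(offsets).pvalue),        # normality check
    }
```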

Citations: 0
Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-06-03. DOI: 10.16910/jemr.12.3.0
Rudolf Groner, Enkelejda Kasneci

The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w

Their common goal is to increase the potential and safety of technology in the digital age in accordance with human capabilities and limitations.
{"title":"Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.","authors":"Rudolf Groner, Enkelejda Kasneci","doi":"10.16910/jemr.12.3.0","DOIUrl":"10.16910/jemr.12.3.0","url":null,"abstract":"<p><p>The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. 
Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8182438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-05-31. DOI: 10.16910/jemr.14.1.6
Ehab W Hermena

Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.

Citations: 1
Gaze aversion in conversational settings: An investigation based on mock job interview.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-05-19. DOI: 10.16910/jemr.14.1.1
Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal

We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and discuss some future research problems.
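As an illustration of the Discrete-Time Markov Chain modeling mentioned above, the sketch below estimates a transition-probability matrix from a sequence of coded gaze states. The state labels are hypothetical; the paper's own state coding may differ.

```python
# Illustrative sketch (assumed state coding, not the authors' exact scheme):
# estimate a discrete-time Markov chain from a sequence of gaze states such as
# "contact", "left", "right", or diagonal aversions, by counting transitions
# and row-normalizing into a transition-probability matrix.
import numpy as np

def transition_matrix(states, state_names):
    """states: list of state labels per time step; state_names: ordered labels."""
    idx = {s: i for i, s in enumerate(state_names)}
    counts = np.zeros((len(state_names), len(state_names)))
    for a, b in zip(states[:-1], states[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # avoid division by zero for unused states
    return counts / row_sums

# Usage with a hypothetical coded sequence:
P = transition_matrix(
    ["contact", "contact", "down-right", "contact", "left", "contact"],
    ["contact", "left", "right", "down-right"],
)
```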

Citations: 8
Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-05-19. DOI: 10.16910/jemr.14.1.5
Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer

Eye tracking (ET) has been shown to reveal the wearer's cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area-of-interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show that incorporating the near-peripheral field of vision increased the proportion of interpretable fixation data considerably, from 23.8% to 78.3% for the AOI screw and from 4.5% to 67.2% for the AOI screwdriver. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
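The sketch below shows the core OGD computation in simplified form: the minimal 2D Euclidean pixel distance from the gaze point to a detected AOI, accumulated into a per-AOI time series. The AOI is reduced to a bounding box here for brevity, whereas the published method operates on machine-learning-based object detections; the detector interface is assumed.

```python
# Minimal sketch of the OGD idea: per frame, compute the minimal 2-D Euclidean
# pixel distance from the gaze point to the detected AOI (simplified here to a
# bounding box), yielding a continuous per-AOI time series.
import numpy as np

def object_gaze_distance(gaze_xy, box):
    """gaze_xy: (x, y) in pixels; box: (x_min, y_min, x_max, y_max) of the AOI."""
    gx, gy = gaze_xy
    x_min, y_min, x_max, y_max = box
    dx = max(x_min - gx, 0.0, gx - x_max)    # 0 if gaze lies within the box extent
    dy = max(y_min - gy, 0.0, gy - y_max)
    return float(np.hypot(dx, dy))           # 0 means the gaze is on the AOI itself

def ogd_series(gaze_points, boxes_per_frame):
    """Build the OGD time series over frames for one AOI."""
    return [object_gaze_distance(g, b) for g, b in zip(gaze_points, boxes_per_frame)]
```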

Citations: 4
A low-cost, high-performance video-based binocular eye tracker for psychophysical research.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-05-05. DOI: 10.16910/jemr.14.3.3
Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel

We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration of both eyes simultaneously while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because it is known that pupil diameter changes can be erroneously registered by pupil-based trackers as a change in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
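To make the calibration idea concrete, here is a minimal sketch of a linear (affine) mapping from raw pupil-center coordinates to screen coordinates, fitted by least squares to the four fixation points. The affine form is an assumption about the described "fast and simple linear calibration scheme"; the authors' actual implementation is in Visual C++.

```python
# Illustrative sketch of a simple linear calibration: fit an affine mapping from
# raw pupil-center coordinates to screen coordinates using four fixation points.
import numpy as np

def fit_affine_calibration(pupil_xy, screen_xy):
    """pupil_xy, screen_xy: (4, 2) arrays of corresponding calibration points."""
    A = np.hstack([np.asarray(pupil_xy), np.ones((len(pupil_xy), 1))])   # (4, 3)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_xy), rcond=None)   # (3, 2)
    return coeffs

def apply_calibration(coeffs, pupil_xy):
    """Map raw pupil coordinates (n, 2) to screen coordinates (n, 2)."""
    P = np.atleast_2d(pupil_xy)
    A = np.hstack([P, np.ones((P.shape[0], 1))])
    return A @ coeffs
```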

Citations: 10
The interplay between task difficulty and microsaccade rate: Evidence for the critical role of visual load.
IF 2.1, Psychology (CAS Quartile 4), Q2 Medicine. Pub Date: 2021-04-28. DOI: 10.16910/jemr.13.5.6
Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz

In previous research, microsaccades have been suggested as psychophysiological indicators of task load. So far, it is still under debate how different types of task demands influence microsaccade rate. This study examines the relation between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back), in which visual load (letters vs. abstract figures) and mental task load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), while mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.
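Microsaccade rate itself has to be extracted from the raw fixation data; the sketch below shows one common velocity-threshold approach in the spirit of Engbert and Kliegl (2003). The paper does not spell out its detection pipeline, so the sampling rate, the threshold multiplier, and the minimum duration are assumptions.

```python
# Minimal sketch of velocity-threshold microsaccade detection and rate estimation.
# Thresholds (lam, min_samples) and the sampling rate fs are assumed values.
import numpy as np

def microsaccade_rate(gaze_deg, fs=1000.0, lam=6.0, min_samples=6):
    """gaze_deg: (n, 2) gaze positions in deg during fixation; returns rate in 1/s."""
    vel = np.gradient(gaze_deg, axis=0) * fs                 # deg/s, per axis
    # Median-based velocity threshold per axis (robust to noise).
    sigma = np.sqrt(np.median(vel**2, axis=0) - np.median(vel, axis=0)**2)
    radius = lam * sigma
    outside = (vel[:, 0] / radius[0])**2 + (vel[:, 1] / radius[1])**2 > 1.0
    # Count runs of supra-threshold samples lasting at least min_samples.
    n_events, run = 0, 0
    for flag in outside:
        run = run + 1 if flag else 0
        if run == min_samples:                               # count each run once
            n_events += 1
    return n_events / (len(gaze_deg) / fs)
```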

Citations: 5