Pub Date: 2021-07-13. eCollection Date: 2021-01-01. DOI: 10.16910/jemr.14.2.5
Liu Xin, Zheng Bin, Duan Xiaoqin, He Wenjing, Li Yuandong, Zhao Jinyu, Zhao Chen, Wang Lin
Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their skills. When trainees encounter difficulty during practice, they need expert feedback to improve their performance. Personal feedback, however, is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during simulated colonoscopic performance. We examined changes in eye movement behavior during moments of navigation loss (MNL), a signature sign of task difficulty during colonoscopy, and tested whether deep learning algorithms can detect MNLs from eye-tracking data. Human eye-gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks under three different data-feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to an expert's judgment of the MNLs based on colonoscopic videos. The best classification outcome was achieved when human eye data were combined with 1,000 synthesized eye-data samples, yielding optimized accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%). This study builds an important foundation for our work on developing a simulation-based education system for training healthcare skills.
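The three reported metrics are standard confusion-matrix quantities over labeled segments. A minimal sketch with hypothetical labels (the actual segment labels and counts are not given in the abstract):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels,
    where 1 marks a moment of navigation loss (MNL)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # MNLs correctly detected
    tn = np.sum(~y_true & ~y_pred)  # non-MNL segments correctly passed
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)    # fraction of true MNLs detected
    specificity = tn / (tn + fp)    # fraction of non-MNLs rejected
    return accuracy, sensitivity, specificity

# Hypothetical labels for 8 procedure segments (1 = MNL).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, sens, spec = classification_metrics(y_true, y_pred)
```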
Title: Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking. Journal of Eye Movement Research. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8327395/pdf/
John Semmlow, Chang Yaramothu, Mitchell Scheiman, Tara L Alvarez
Introduction: Previous studies have shown that the slow, or fusion sustaining, component of disparity vergence contains oscillatory behavior as would be expected if fusion is sustained by visual feedback. This study extends the examination of this behavior to a wider range of frequencies and a larger number of subjects.
Methods: Disparity vergence responses to symmetrical 4.0 deg step changes in target position were recorded in 20 subjects. Approximately three seconds of the late component of each response were isolated using interactive graphics, and their frequency spectra were calculated. Peaks in these spectra associated with oscillatory behavior were identified and examined.
Results: All subjects exhibited oscillatory behavior with fundamental frequencies between 0.37 and 0.55 Hz, much lower than those identified in the earlier study. All responses showed significant higher-frequency components. The relationship between these higher-frequency components and the fundamental frequency suggests they may be harmonics. A correlation was found across subjects between the amplitude of the fundamental frequency and the maximum velocity of the fusion-initiating component, probably due to the gain of shared neural pathways.
Conclusion: Low-frequency oscillatory behavior was found in all subjects, supporting the view that the slow, or fusion-sustaining, component is mediated by feedback control.
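Identifying a fundamental frequency as the dominant peak of a response spectrum can be sketched with a synthetic vergence trace. The 50 Hz sampling rate and signal composition below are assumptions for illustration, not the study's recording parameters:

```python
import numpy as np

fs = 50.0                      # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s window, 1000 samples
# Synthetic slow-vergence trace: 0.45 Hz fundamental plus a weaker
# second harmonic, mimicking the oscillatory late component.
signal = np.sin(2 * np.pi * 0.45 * t) + 0.3 * np.sin(2 * np.pi * 0.90 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
fundamental = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

With an integer number of cycles in the window there is no spectral leakage, so the peak lands exactly on the 0.45 Hz bin; real data would need windowing and peak interpolation.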
Title: Vergence Fusion Sustaining Oscillations. Journal of Eye Movement Research. Pub Date: 2021-06-28. DOI: 10.16910/jemr.14.1.4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8247062/pdf/
Pub Date: 2021-06-11. eCollection Date: 2021-01-01. DOI: 10.16910/jemr.14.2.4
Benedict C O F Fehringer
The Index of Cognitive Activity (ICA) and its open-source alternative, the Index of Pupillary Activity (IPA), are pupil-based indicators of cognitive workload that are independent of light changes. Both indicators were investigated with regard to the influences of cognitive demand, fatigue, and inter-individual differences. In addition, the variability of pupil changes between the two eyes (difference values) was compared with the usually calculated pupillary changes averaged over both eyes (mean values). Fifty-five participants performed a spatial thinking test, the R-Cube-Vis Test, with six distinct difficulty levels, plus a simple fixation task before and after the R-Cube-Vis Test. The distributions of the ICA and IPA were comparable. The ICA/IPA values were lower during the simple fixation tasks than during the cognitively demanding R-Cube-Vis Test. A fatigue effect was found only for the mean ICA values. The effects of both indicators between difficulty levels of the test were larger when inter-individual differences were controlled for using z-standardization. The difference values seemed to control for fatigue and appeared to differentiate better between more demanding cognitive tasks than the mean values. The derived recommendations for the ICA/IPA values help gain more insight into individual performance and behavior in, e.g., training and testing scenarios.
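Controlling inter-individual differences via z-standardization means rescaling each participant's indicator values against their own mean and spread before comparing difficulty levels. A minimal sketch with hypothetical ICA/IPA values:

```python
import numpy as np

# Hypothetical indicator values: rows = participants,
# columns = the six difficulty levels of the R-Cube-Vis Test.
values = np.array([
    [0.42, 0.45, 0.47, 0.50, 0.53, 0.55],
    [0.30, 0.31, 0.33, 0.36, 0.38, 0.40],
])

# z-standardize within each participant so level effects are compared
# on a common scale, removing inter-individual baseline differences.
z = (values - values.mean(axis=1, keepdims=True)) / values.std(axis=1, keepdims=True)
```

After this transformation every participant contributes mean 0 and unit variance, so between-level contrasts are no longer dominated by participants with large baseline pupil activity.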
Title: Optimizing the usage of pupillary based indicators for cognitive workload. Journal of Eye Movement Research. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8299071/pdf/
Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev
Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality holds even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measuring accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.
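One simple way to see why a mean misleads for multimodal offsets is to count density modes; the kernel-density mode count below is an illustrative stand-in, not necessarily the authors' statistical test:

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(samples, grid_size=512):
    """Rough mode count: evaluate a Gaussian KDE on a grid and count
    local maxima. Illustrative only; formal unimodality tests
    (e.g. Hartigan's dip test) are stricter."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = kde(grid)
    interior = density[1:-1]
    peaks = (interior > density[:-2]) & (interior > density[2:])
    return int(peaks.sum())

rng = np.random.default_rng(0)
# Hypothetical bimodal angular-offset sample: two fixation clusters
# of 500 points each, centred 2 deg apart.
offsets = np.concatenate([rng.normal(0.0, 0.2, 500),
                          rng.normal(2.0, 0.2, 500)])
n_modes = count_modes(offsets)   # two modes, yet the mean sits near 1 deg
```

Here the mean offset (~1 deg) describes a location where the eye almost never was, which is exactly the interpretability problem the abstract raises.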
Title: Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal. Journal of Eye Movement Research. Pub Date: 2021-06-03. DOI: 10.16910/jemr.14.3.2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189800/pdf/
The control of technological systems by human operators has been an object of study for many decades. The increasing complexity of the digital age has made optimizing the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role; they are captured by eye-tracking devices in the contributions to this thematic issue. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and increasing interdisciplinarity of researchers. Whereas most studies investigate bottom-up processes of covert attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving, where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving.
Through a combination of advanced gaze-tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimal target size and gaze-triggering dwell time in ECI. Their results have exemplary application value for future interface design. Aircraft training and pilot selection are commonly performed on simulators. This makes it possible to study human capabilities and their limitations in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w
Their common goal is to enhance the potential and safety of technology in the digital age, in line with human abilities and limitations.
Title: Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue. Authors: Rudolf Groner, Enkelejda Kasneci. Journal of Eye Movement Research. Pub Date: 2021-06-03. DOI: 10.16910/jemr.12.3.0. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8182438/pdf/
Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.
Title: Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian. Author: Ehab W Hermena. Journal of Eye Movement Research. Pub Date: 2021-05-31. DOI: 10.16910/jemr.14.1.6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189716/pdf/
Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and discuss some future research problems.
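Modeling gaze sequences as Discrete-Time Markov Chains amounts to estimating a transition matrix from observed state transitions. A minimal sketch with a hypothetical three-state coding (the study's actual state set and coding scheme may differ):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood DTMC transition matrix estimated from one
    observed state sequence: counts of each a->b transition,
    normalized per row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # leave unvisited states as zero rows
    return counts / row_sums

# Hypothetical coding: 0 = gaze contact, 1 = sideways aversion,
# 2 = diagonal aversion.
sequence = [0, 0, 1, 0, 2, 2, 0, 1, 1, 0, 0, 2]
P = transition_matrix(sequence, n_states=3)
```

Entry `P[i, j]` then estimates the probability of moving from state `i` to state `j` at the next time step, which is what allows the interviewer's and interviewee's aversion dynamics to be compared quantitatively.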
Title: Gaze aversion in conversational settings: An investigation based on mock job interview. Journal of Eye Movement Research. Pub Date: 2021-05-19. DOI: 10.16910/jemr.14.1.1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8188832/pdf/
Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer
Eye tracking (ET) has been shown to reveal the wearer's cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area-of-interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. In an evaluation of two AOIs in a real surgical procedure, incorporating the near-peripheral field of vision increased the proportion of interpretable fixation data from 23.8% to 78.3% for the AOI "screw" and from 4.5% to 67.2% for the AOI "screwdriver". Additionally, the evaluation of a multi-OGD time-series representation showed the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
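The core OGD computation, the minimal 2D Euclidean pixel distance from the gaze point to an AOI, can be sketched as follows. The axis-aligned bounding box is a simplification (detected AOIs may be arbitrary regions), and the coordinates are hypothetical:

```python
import math

def object_gaze_distance(gaze, box):
    """Minimal Euclidean pixel distance from a gaze point to an AOI,
    simplified here to an axis-aligned bounding box (x0, y0, x1, y1).
    Returns 0 when the gaze point lies inside the AOI."""
    gx, gy = gaze
    x0, y0, x1, y1 = box
    dx = max(x0 - gx, 0, gx - x1)  # horizontal distance outside the box
    dy = max(y0 - gy, 0, gy - y1)  # vertical distance outside the box
    return math.hypot(dx, dy)

screw_aoi = (100, 100, 160, 140)  # hypothetical detected AOI, in pixels
on_target = object_gaze_distance((120, 120), screw_aoi)        # foveal hit
near_peripheral = object_gaze_distance((200, 120), screw_aoi)  # off-AOI gaze
```

Evaluating this per video frame yields the continuous time series the abstract describes: small nonzero distances mark near-peripheral monitoring of an object rather than discarding those samples as "not on the AOI".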
Title: Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications. Journal of Eye Movement Research. Pub Date: 2021-05-19. DOI: 10.16910/jemr.14.1.5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189527/pdf/
Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel
We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration in both eyes simultaneously while subjects sequentially fixate four fixation points on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because pupil diameter changes can be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
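A linear calibration from four fixation points can be sketched as an affine least-squares fit from raw pupil-centre coordinates to screen coordinates. All coordinates below are hypothetical and noise-free; the authors' actual calibration model may differ in detail:

```python
import numpy as np

# Hypothetical calibration data: raw pupil-centre positions recorded
# while the subject fixated four known screen points.
pupil = np.array([[10.0, 12.0], [40.0, 12.0], [10.0, 42.0], [40.0, 42.0]])
screen = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 200.0], [300.0, 200.0]])

# Affine model: screen = [px, py, 1] @ params, fitted per axis by
# least squares (exact here, since the toy data are noise-free).
A = np.hstack([pupil, np.ones((4, 1))])
params, *_ = np.linalg.lstsq(A, screen, rcond=None)

def to_screen(p):
    """Map a raw pupil-centre coordinate to screen coordinates."""
    return np.array([p[0], p[1], 1.0]) @ params
```

Such a linear mapping is fast to fit and evaluate, which matches the abstract's trade-off: excellent behavior near the calibrated centre, with accuracy degrading outside the central ~10 degrees where eye-to-screen geometry becomes nonlinear.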
{"title":"A low-cost, high-performance video-based binocular eye tracker for psychophysical research.","authors":"Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel","doi":"10.16910/jemr.14.3.3","DOIUrl":"https://doi.org/10.16910/jemr.14.3.3","url":null,"abstract":"<p><p>We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration simultaneously in both eyes while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because pupil diameter changes are known to be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. 
We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8190563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39010218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
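The abstract above mentions a "fast and simple linear calibration scheme" from four fixation points but does not spell it out. As an illustration only, a common formulation fits an affine map from pupil-center coordinates to screen coordinates by least squares; the function names and NumPy-based details below are our assumptions, not the authors' implementation:

```python
import numpy as np

def fit_linear_calibration(pupil_xy, screen_xy):
    """Fit an affine map from pupil-center coordinates to screen
    coordinates by least squares (one map per eye).
    pupil_xy, screen_xy: (N, 2) arrays from N fixation targets."""
    pupil_xy = np.asarray(pupil_xy, dtype=float)
    screen_xy = np.asarray(screen_xy, dtype=float)
    # Augment with a constant column so the fit includes an offset:
    # [sx, sy] ~ [px, py, 1] @ A, with A a 3x2 coefficient matrix.
    X = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])
    A, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)
    return A

def apply_calibration(A, pupil_xy):
    """Map raw pupil-center samples to screen coordinates."""
    pupil_xy = np.atleast_2d(np.asarray(pupil_xy, dtype=float))
    X = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])
    return X @ A
```

With four non-collinear fixation targets the 3x2 affine model is slightly overdetermined, which is one plausible reason such a scheme calibrates quickly but degrades outside the central visual field, where the pupil-to-gaze mapping becomes nonlinear.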
Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz
In previous research, microsaccades have been suggested as psychophysiological indicators of task load. It is still under debate, however, how different types of task demands influence microsaccade rate. This study examines the relationship between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back) in which visual task load (letters vs. abstract figures) and mental task load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), whereas mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.
{"title":"The interplay between task difficulty and microsaccade rate: Evidence for the critical role of visual load.","authors":"Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz","doi":"10.16910/jemr.13.5.6","DOIUrl":"10.16910/jemr.13.5.6","url":null,"abstract":"<p><p>In previous research, microsaccades have been suggested as psychophysiological indicators of task load. It is still under debate, however, how different types of task demands influence microsaccade rate. This study examines the relationship between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back) in which visual task load (letters vs. abstract figures) and mental task load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), whereas mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8188521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
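The abstract does not state how microsaccades were detected from the eye-tracking data. A widely used approach is a velocity-threshold detector in the spirit of Engbert & Kliegl (2003), with a robust, median-based velocity threshold per axis. The sketch below is a hypothetical illustration (the function name, parameter defaults, and rate computation are our assumptions, not the authors' pipeline):

```python
import numpy as np

def microsaccade_rate(gaze_xy, fs, lam=6.0, min_samples=3):
    """Estimate microsaccade rate (events/s) from one eye's gaze trace.
    gaze_xy: (N, 2) positions in degrees; fs: sampling rate in Hz;
    lam: threshold multiplier on the robust velocity spread."""
    g = np.asarray(gaze_xy, dtype=float)
    # Central-difference velocity in deg/s.
    v = np.gradient(g, axis=0) * fs
    # Robust (median-based) estimate of the velocity spread per axis.
    sigma = np.sqrt(np.median(v**2, axis=0) - np.median(v, axis=0)**2)
    sigma = np.maximum(sigma, 1e-9)  # guard against a degenerate spread
    # A sample exceeds the elliptic threshold when its normalized
    # velocity magnitude is greater than 1.
    above = np.sum((v / (lam * sigma))**2, axis=1) > 1.0
    # Count runs of consecutive above-threshold samples of minimum length.
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]
    n_events = int(np.sum((ends - starts) >= min_samples))
    return n_events / (len(g) / fs)
```

Under such a scheme, stimuli that drive larger or more frequent fixational excursions (e.g., visually demanding abstract figures) would yield more above-threshold velocity runs and hence a higher estimated rate.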