Mental fatigue is known to occur as a result of activities in, e.g., transportation, health care, military operations, and numerous other cognitively demanding tasks. Gaze tracking has wide-ranging applications, with the technology becoming more compact and less computationally demanding. Although numerous techniques have been applied to measure mental fatigue using gaze tracking, smooth-pursuit movement, a natural eye movement generated when following a moving object with gaze, has not been explored in relation to mental fatigue. In this paper, we report the results of a smooth-pursuit-based eye-typing experiment with varying task difficulty to generate cognitive load, performed in the morning and afternoon by 36 participants. We investigated the effects of time-on-task and time of day on mental fatigue using self-reported questionnaires and smooth-pursuit performance extracted from the gaze data. Self-reported mental fatigue increased with time-on-task, but time of day had no effect. The results show that smooth-pursuit performance declined with time-on-task, with increased error in gaze position and an inability to match the speed of the moving object. The findings demonstrate the feasibility of detecting mental fatigue from smooth-pursuit movements during an eye-interactive task such as eye-typing.
Research on visual expertise has progressed significantly due to the availability of eye tracking tools. However, bringing together research on expertise and eye tracking methodology poses several challenges, because visual information processes should be studied in authentic, domain-specific environments. Among the barriers to designing appropriate research are the proper definition of levels of expertise, the tension between internal validity (experimental control) and external validity (authentic environments), and the appropriate methodology for studying eye movements in a three-dimensional environment. This exploratory study aims to address these challenges and to provide an adequate research setting by investigating visual expertise in sculpting. Eye movements and gaze patterns of 20 participants were recorded while they looked at two sculptures in a museum. The participants were assigned to four groups based on their level of expertise (laypersons, novices, semi-experts, experts). Using mobile eye tracking, the following parameters were measured: number of fixations, fixation duration, dwell time in relevant areas, and revisits to relevant areas. Moreover, scan paths were analysed using the eyenalysis approach. Conclusions are drawn on both the nature of visual expertise in sculpting and the potential (and limitations) of empirical designs that aim to investigate expertise in authentic environments.
About ECEM: ECEM was initiated by Rudolf Groner (Bern), Dieter Heller (Bayreuth at the time) and Henk Breimer (Tilburg) in the 1980s to provide a forum for an interdisciplinary group of scientists interested in eye movements. Since the inaugural meeting in Bern, the conference has been held every two years at different venues across Europe until 2021, when it was planned to take place in Leicester but was cancelled due to the COVID-19 pandemic. It was decided to hold the meeting in Leicester in August 2022 instead, as an in-person meeting rather than an online or hybrid event. Incidentally, the present meeting is the third time the conference has come to the English East Midlands, now in Leicester following previous meetings in the neighbouring cities of Derby and Nottingham. The sites of previous ECEMs and their webpages can be found here.
Our objective is to analyze scanpaths acquired while participants performed a reading task aimed at answering a binary question: is the text related to a given target topic or not? We propose a data-driven method based on hidden semi-Markov chains to segment scanpaths into phases derived from the model states, which are shown to represent different cognitive strategies: normal reading, fast reading, information search, and slow confirmation. These phases were confirmed using different external covariates, among them semantic information extracted from the texts. The analyses highlighted strong preferences of specific participants for specific strategies and, more generally, large individual variability in eye-movement characteristics, as accounted for by random effects. As a perspective, we discuss the possibility of improving reading models by accounting for possible sources of heterogeneity during reading.
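The defining feature of a hidden *semi*-Markov chain, as opposed to an ordinary Markov chain, is that each state carries an explicit duration distribution rather than persisting via self-transitions. A minimal generative sketch of this idea, with phase names taken from the abstract but all parameters (dwell means, transition probabilities, the Poisson duration model) purely illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase labels from the abstract; all numeric parameters below are assumptions.
phases = ["normal_reading", "fast_reading", "info_search", "slow_confirmation"]

# Mean dwell time (number of fixations) per phase: the explicit duration
# model that makes the chain *semi*-Markov.
mean_dwell = {"normal_reading": 12, "fast_reading": 6,
              "info_search": 8, "slow_confirmation": 4}

# Phase transition matrix with no self-transitions, since persistence
# within a phase is handled by the duration draw, not the chain.
A = np.array([[0.0, 0.4, 0.4, 0.2],
              [0.5, 0.0, 0.3, 0.2],
              [0.4, 0.3, 0.0, 0.3],
              [0.6, 0.2, 0.2, 0.0]])

def sample_phase_sequence(n_fixations):
    """Sample a per-fixation phase label sequence from the semi-Markov chain."""
    labels, state = [], 0  # assume the scanpath starts in normal reading
    while len(labels) < n_fixations:
        # Explicit duration draw: Poisson dwell, at least one fixation.
        d = max(1, rng.poisson(mean_dwell[phases[state]]))
        labels.extend([phases[state]] * d)
        state = rng.choice(4, p=A[state])
    return labels[:n_fixations]

seq = sample_phase_sequence(50)
```

Segmentation then works in the opposite direction: given observed fixation features, the most likely phase sequence under such a model is inferred, which is what yields the interpretable reading phases described above.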
The assessment of the visual field in young children continues to be a challenge. Children often do not sit still, fail to fixate stimuli for longer durations, and have limited verbal capacity to report visibility. Therefore, we introduced a head-mounted VR display with gaze-contingent flicker pupil perimetry (VRgcFPP). We presented large flickering patches at different eccentricities and angles in the periphery to evoke pupillary oscillations, and three fixation stimulus conditions to determine best practices for optimal fixation and pupil response quality. A total of twenty children (3–11 years) passively fixated a dot, counted the repeated appearances of an animated character (counting task), and watched an animated movie, in separate trials of 80 s each (20 patch locations, 4 s per location). The results showed that gaze precision and accuracy did not differ significantly across the fixation conditions, but pupil amplitudes were strongest for the dot and counting tasks. The VR set-up appears to be an ideal apparatus for children, allowing free range of movement, an engaging visual task, and reliable eye measurements. We recommend the fixation counting task for pupil perimetry because children enjoyed it the most and it elicited the strongest pupil responses.
Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing these events is typically done by algorithms. Here we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, in addition to serving as a classification method. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and optionally post-saccadic oscillations and smooth pursuits. We evaluated gazeHMM's performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were less well recovered when we included a smooth pursuit state and/or added even small amounts of noise to the simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to switch rapidly between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye-movement processes and explicitly model event durations to classify smooth pursuits more accurately.
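The core of HMM-based event classification is decoding the most likely hidden state sequence from observed gaze features. A minimal sketch of that idea, restricted to two states (fixation vs. saccade) with Gaussian emissions over gaze velocity and Viterbi decoding; all parameters are illustrative assumptions and do not reproduce gazeHMM itself:

```python
import numpy as np

def viterbi(obs, log_A, log_pi, log_emission):
    """Most likely hidden state sequence for an HMM, in log space."""
    T, K = len(obs), log_A.shape[0]
    delta = np.zeros((T, K))          # best log-probability ending in each state
    psi = np.zeros((T, K), dtype=int) # best predecessor state
    delta[0] = log_pi + log_emission(obs[0])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emission(obs[t])
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):    # backtrack
        states[t] = psi[t + 1, states[t + 1]]
    return states

# States: 0 = fixation (low velocity), 1 = saccade (high velocity).
# Gaussian log-density over velocity in deg/s; parameters are illustrative.
means, stds = np.array([10.0, 300.0]), np.array([8.0, 100.0])
def log_emission(v):
    return -0.5 * ((v - means) / stds) ** 2 - np.log(stds * np.sqrt(2 * np.pi))

log_A = np.log(np.array([[0.95, 0.05],   # fixations tend to persist
                         [0.20, 0.80]])) # saccades are brief
log_pi = np.log(np.array([0.9, 0.1]))

velocities = np.array([5, 8, 12, 250, 320, 280, 9, 6, 7])
labels = viterbi(velocities, log_A, log_pi, log_emission)
# labels: 0 for the slow samples, 1 for the fast burst in the middle
```

In a generative formulation like gazeHMM's, the emission and transition parameters would themselves be estimated from the data by maximum likelihood rather than fixed by hand as here.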
Background: For many years, researchers have studied how eye movements influence reading and learning ability. The objective of this study is to determine the relationships between the different publications and authors, as well as to identify the different areas of eye-movement research. Methods: Web of Science was the database used to search for publications from 1900 to May 2021, using the terms "Eye movement" AND "Academic achiev*". The analysis of the publications was performed using the CitNetExplorer, VOSviewer and CiteSpace software. Results: 4391 publications and 11033 citation networks were found. The year with the most publications was 2018, with a total of 318 publications and 10 citation networks. The most cited publication was "Saccade target selection and object recognition: evidence for a common attentional mechanism", published by Deubel et al. in 1999, with a citation index of 214. Using the clustering function, nine groups were found that cover the main research areas in this field: neurological, age, perceptual attention, visual disturbances, sports, driving, sleep, vision therapy and academic performance. Conclusion: Although this is a multidisciplinary field of study, the topic with the most publications to date is the visual search procedure at the neurological level.