This paper discusses the design and development of a low-cost virtual reality (VR) based flight simulator with a cognitive load estimation feature using ocular and EEG signals. The focus is on exploring methods to evaluate the pilot's interactions with the aircraft by quantifying the pilot's perceived cognitive load under different task scenarios. A realistic target-tracking task and battlefield context are designed in VR. A head-mounted eye-gaze tracker and an EEG headset are used to acquire pupil diameter, gaze fixation, gaze direction, and EEG theta, alpha, and beta band power data in real time. We developed an AI agent model in VR and created scenarios of interaction with the piloted aircraft. To estimate the pilot's cognitive load, we used low-frequency pupil diameter variations, fixation rate, gaze distribution pattern, an EEG signal-based task load index, and an EEG task engagement index. We compared these physiological measures of workload with standard inceptor control-based workload metrics. Results of the piloted simulation study indicate that the metrics discussed in the paper are strongly associated with the pilot's perceived task difficulty.
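As a point of reference, band-power indices of this kind are commonly computed from the power spectrum of the EEG signal; a minimal sketch using a generic formulation of the task engagement index, beta / (alpha + theta), is given below. The channel selection, windowing, and exact index definitions used in the paper are not specified here, so treat the details as illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Mean power spectral density of one EEG channel in the [lo, hi) Hz band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def engagement_index(eeg, fs):
    """A common task-engagement index: beta / (alpha + theta)."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# Example with synthetic data: 10 s of a single channel sampled at 256 Hz.
fs = 256
eeg = np.random.randn(10 * fs)
print(f"engagement index: {engagement_index(eeg, fs):.3f}")
```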
In recent years, innovative multiparty eye tracking setups have been introduced to synchronously capture eye movements of multiple individuals engaged in computer-mediated collaboration. Despite its great potential for studying cognitive processes within groups, the method was primarily used as an interactive tool to enable and evaluate shared gaze visualizations in remote interaction. We conducted a systematic literature review to provide a comprehensive overview of what to consider when using multiparty eye tracking as a diagnostic method in experiments and how to process the collected data to compute and analyze group-level metrics. By synthesizing our findings in an integrative conceptual framework, we identified fundamental requirements for a meaningful implementation. In addition, we derived several implications for future research, as multiparty eye tracking was mainly used to study the correlation between joint attention and task performance in dyadic interaction. We found multidimensional recurrence quantification analysis, a novel method to quantify group-level dynamics in physiological data, to be a promising procedure for addressing some of the highlighted research gaps. In particular, the computation method enables scholars to investigate more complex cognitive processes within larger groups, as it scales up to multiple data streams.
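To make the group-level idea concrete, the core of a recurrence-based analysis over several synchronized data streams can be sketched as follows. This is a deliberately simplified illustration of one MdRQA quantity, the recurrence rate, computed over stacked streams; embedding parameters and further measures such as determinism are omitted, and all names are illustrative.

```python
import numpy as np

def recurrence_rate(streams, radius):
    """streams: array of shape (n_samples, n_streams), e.g. one physiological
    series per group member. Two time points are counted as recurrent when
    their Euclidean distance across all streams at once is below `radius`."""
    d = np.linalg.norm(streams[:, None, :] - streams[None, :, :], axis=-1)
    rec = d < radius
    np.fill_diagonal(rec, False)          # ignore trivial self-recurrence
    n = len(streams)
    return rec.sum() / (n * (n - 1))

# Example: three synthetic, weakly coupled signals (a stand-in for a triad).
t = np.linspace(0, 10, 500)
streams = np.column_stack([np.sin(t + phase) + 0.1 * np.random.randn(t.size)
                           for phase in (0.0, 0.2, 0.4)])
print(f"recurrence rate: {recurrence_rate(streams, radius=0.5):.3f}")
```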
Combining eye tracking and virtual reality (VR) is a promising approach to tackle various applied research questions. As this approach is relatively new, routines are not yet established and the first steps can be full of potential pitfalls. The present paper provides a practical example to lower the barriers to getting started. More specifically, I focus on an affordable add-on technology, the Pupil Labs eye tracking add-on for the HTC Vive. As add-on technology with all relevant source code available on GitHub, a high degree of freedom in preprocessing, visualizing, and analyzing eye tracking data in VR can be achieved. At the same time, some extra preparatory steps for the setup of hardware and software are necessary. Therefore, specifics of eye tracking in VR from unboxing, software integration, and procedures to analyzing the data and maintaining the hardware will be addressed. The Pupil Labs eye tracking add-on for the HTC Vive represents a highly transparent approach compared to existing alternatives. Characteristics of eye tracking in VR, in contrast to other head-mounted and remote eye trackers applied in the physical world, will be discussed. In conclusion, the paper contributes to the idea of open science in two ways: first, by making the necessary routines transparent and therefore reproducible; second, by stressing the benefits of using open source software.
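For orientation, a minimal sketch of pulling live pupil data over the Pupil Capture network API is shown below. It assumes Pupil Capture or Pupil Service is running with Pupil Remote on its default port; the field names follow the published message format, but they should be verified against the software version in use, and this is not part of the paper's own workflow.

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the subscription port.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to pupil datum messages (one per eye camera frame).
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

for _ in range(100):
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload)
    print(topic.decode(), datum.get("timestamp"), datum.get("diameter"))
```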
The control of eye gaze is critical to the execution of many skills. The observation that task experts in many domains exhibit more efficient control of eye gaze than novices has led to the development of gaze training interventions that teach these behaviours. We aimed to extend this literature by i) examining the relative benefits of feed-forward (observing an expert's eye movements) versus feed-back (observing your own eye movements) training, and ii) automating this training within virtual reality. Serving personnel from the British Army and Royal Navy were randomised to either feed-forward or feed-back training within a virtual reality simulation of a room search and clearance task. Eye movement metrics - including visual search, saccade direction, and entropy - were recorded to quantify the efficiency of visual search behaviours. Feed-forward and feed-back eye movement training produced distinct learning benefits, but both accelerated the development of efficient gaze behaviours. However, we found no evidence that these more efficient search behaviours transferred to better decision making in the room clearance task. Our results suggest integrating eye movement training principles within virtual reality training simulations may be effective, but further work is needed to understand the learning mechanisms.
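One of the efficiency metrics mentioned here, gaze entropy, can be illustrated with a short sketch: the Shannon entropy of first-order transitions between areas of interest (AOIs) in a fixation sequence. This is a generic formulation and the AOI labels are hypothetical; the paper's exact entropy definition may differ.

```python
import numpy as np
from collections import Counter

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of first-order AOI transitions in a fixation
    sequence; lower values indicate more structured, predictable search."""
    transitions = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    total = sum(transitions.values())
    probs = np.array([count / total for count in transitions.values()])
    return float(-(probs * np.log2(probs)).sum())

# Example: a fixation sequence over four AOIs in a searched room.
print(transition_entropy(["door", "window", "desk", "door", "corner", "desk"]))
```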
Mobile eye tracking helps to investigate real-world settings in which participants can move freely. This enhances the studies' ecological validity but poses challenges for the analysis. Often, the 3D stimulus is reduced to a 2D image (reference view) and the fixations are manually mapped to this 2D image. This leads to a loss of information about the three-dimensionality of the stimulus. Using several reference images from different perspectives poses new problems, in particular concerning the mapping of fixations in the transition areas between two reference views. A newly developed approach (MAP3D) is presented that enables the generation of a 3D model of the stimulus and the automatic mapping of fixations onto this virtual 3D model. This avoids problems with the reduction to a 2D reference image and with transitions between images. The x, y and z coordinates of the fixations are available as a point cloud and as .csv output. Initial exploratory application and evaluation tests are promising: MAP3D offers innovative ways of post-hoc mapping fixation data onto 3D stimuli with open-source software and thus provides cost-efficient new avenues for research.
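The core idea of mapping fixations onto a 3D model can be sketched as casting a ray from the scene-camera position along the gaze direction onto a triangle mesh and exporting the hit points as a .csv point cloud. The sketch below is purely illustrative and uses the open-source trimesh library with made-up camera poses; MAP3D's actual pipeline, including photogrammetric model generation, is considerably more involved.

```python
import csv
import numpy as np
import trimesh

# Load (or, here, create) the 3D model of the stimulus.
mesh = trimesh.creation.box(extents=(2.0, 2.0, 2.0))

# Hypothetical per-fixation camera positions and gaze directions (unit vectors).
origins = np.array([[0.0, 0.0, 5.0], [0.5, 0.2, 5.0]])
directions = np.array([[0.0, 0.0, -1.0], [-0.1, 0.0, -1.0]])
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Intersect each gaze ray with the mesh to obtain one 3D fixation point per ray.
points, ray_ids, _ = mesh.ray.intersects_location(origins, directions,
                                                  multiple_hits=False)

# Write the resulting point cloud to a .csv file.
with open("fixations_3d.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["fixation_id", "x", "y", "z"])
    for ray_id, (x, y, z) in zip(ray_ids, points):
        writer.writerow([int(ray_id), x, y, z])
```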
We compared the performance of two smooth-pursuit-based object selection algorithms in Virtual Reality (VR). To assess the best algorithm for a range of configurations, we systematically varied the number of targets to choose from, their distance, and their movement pattern (linear and circular). Performance was operationalized as the proportions of hits, misses, and non-detections. Averaged over all distances, the correlation-based algorithm performed better for circular movement patterns than for linear ones (F(1,11) = 24.27, p < .001, η² = .29). This was not found for the difference-based algorithm (F(1,11) = 0.98, p = .344, η² = .01). Both algorithms performed better at close distances than at larger ones (F(1,11) = 190.77, p < .001, η² = .75 for the correlation-based algorithm, and F(1,11) = 148.20, p < .001, η² = .42 for the difference-based one). A distance × movement interaction effect also emerged. After systematically varying the number of targets, these results could be replicated, with a slightly smaller effect. Based on the performance levels, we introduce the concept of an optimal threshold algorithm, which suggests the best detection algorithm for the individual target configuration. Lessons learned from adding the third dimension to the detection algorithms and the role of distractors are discussed, and suggestions for future research are added.
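A minimal sketch of a correlation-based smooth-pursuit selection scheme of the kind compared here is shown below: over a moving time window, the Pearson correlation between the gaze trajectory and each target's trajectory is computed per axis, and a target is selected when its weaker per-axis correlation exceeds a threshold. The threshold, window length, and target set are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_target(gaze_xy, targets_xy, threshold=0.8):
    """gaze_xy: (n_samples, 2) gaze positions within the current window.
    targets_xy: dict mapping target id -> (n_samples, 2) target positions.
    Returns the id of the best-matching target, or None (non-detection)."""
    best_id, best_score = None, threshold
    for target_id, traj in targets_xy.items():
        corr_x = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        corr_y = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        score = min(corr_x, corr_y)       # both axes must co-vary with gaze
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id

# Example: gaze follows target "a" (circular motion) with a little noise,
# while target "b" moves linearly.
t = np.linspace(0, 2 * np.pi, 90)
target_a = np.column_stack([np.cos(t), np.sin(t)])
target_b = np.column_stack([t, 0.5 * t])
gaze = target_a + 0.05 * np.random.randn(*target_a.shape)
print(select_target(gaze, {"a": target_a, "b": target_b}))
```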
This study examines short-term improvement of music performances and oculomotor behaviour during four successive executions of a brief musical piece composed by Bartók, "Slovak Boys' Dance". Pianists (n=22) were allowed to practice for two minutes between trials. Eye-tracking data were collected, as well as MIDI information from the pianists' performances. Cognitive skills were assessed with a spatial memory test and a reading span test. Principal component analysis (PCA) enabled us to distinguish two axes, one associated with anticipation and the other with dependence on, or independence from, the written code. The effect of musical structure, determined by the emergence of different sections in the score, was observed in all the dependent variables selected from the PCA; we also observed an effect of practice on the number of fixations, the number of glances at the keyboard (GAK) and the awareness span. Pianist expertise was associated with fewer fixations and GAK, better anticipation capacities and more effective strategies for visual monitoring of motor movements. The significant correlations observed between the reading span test and GAK duration highlight the involvement of working memory during music reading.
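For illustration, a PCA of this kind can be run on a table of per-trial eye-movement and performance variables, with the component loadings then inspected to interpret the axes. The variable names and data below are hypothetical stand-ins, not the paper's actual feature set or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
variables = ["n_fixations", "n_glances_at_keyboard", "awareness_span", "tempo_stability"]
X = rng.normal(size=(22, len(variables)))   # 22 pianists x 4 hypothetical variables

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))

# Inspect how much variance each axis explains and which variables load on it.
print("explained variance ratio:", pca.explained_variance_ratio_)
for name, loadings in zip(variables, pca.components_.T):
    print(f"{name:>24}: PC1={loadings[0]:+.2f}  PC2={loadings[1]:+.2f}")
```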
The eyes are in constant movement to optimize the interpretation of the visual scene by the brain. Eye movements are controlled by complex neural networks that interact with the rest of the brain. The direction of our eye movements could thus be influenced by our cognitive activity (imagination, internal dialogue, memory, etc.). A given cognitive activity could then cause the gaze to move in a specific direction (a brief movement that would be instinctive and unconscious). Neuro Linguistic Programming (NLP), developed in the 1970s by Richard Bandler and John Grinder (a psychologist and a linguist, respectively), proposed a comprehensive theory associating gaze directions with specific mental tasks. According to this theory, the observed visual path can be used to infer the participant's thoughts and cognitive processes. Although NLP is widely used in many disciplines (communication, psychology, psychotherapy, marketing, etc.), to date few scientific studies have examined the validity of this theory. Using eye tracking, this study explores one of the hypotheses of this theory, a pillar of NLP concerning visual language. We created a protocol based on a series of questions of different types (assumed to engage different brain areas) and used eye tracking to record gaze movements at the end of each question while the participants were thinking and elaborating the answer. Our results show that 1) complex questions elicit significantly more eye movements than control questions that require little reflection, 2) the movements are not random but are oriented in particular directions, according to the different question types, and 3) the orientations observed are not those predicted by the NLP theory. This pilot experiment paves the way for further investigations to decipher the close links between eye movements and neural network activities in the brain.
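A directional analysis of this kind can be sketched as binning each post-question gaze displacement into one of eight angular sectors and testing whether the resulting distribution departs from uniformity. This is a generic illustration with simulated data; the angles, bins, and statistical test used in the study are not reproduced here.

```python
import numpy as np
from scipy.stats import chisquare

def direction_bins(displacements, n_bins=8):
    """displacements: (n, 2) array of gaze movement vectors (dx, dy).
    Returns counts per angular sector (sector 0 starts at 'right', counter-clockwise)."""
    angles = np.arctan2(displacements[:, 1], displacements[:, 0]) % (2 * np.pi)
    bins = (angles / (2 * np.pi) * n_bins).astype(int) % n_bins
    return np.bincount(bins, minlength=n_bins)

# Example: simulated movements biased toward the upper left.
rng = np.random.default_rng(1)
moves = rng.normal(loc=[-1.0, 1.0], scale=0.8, size=(200, 2))
counts = direction_bins(moves)
print(counts, chisquare(counts))  # a low p-value suggests non-uniform directions
```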
Purpose: To assess optical and motor changes associated with near-vision reading under different controlled lighting conditions performed with two different types of electronic screens. Methods: Twenty-four healthy subjects with a mean age of 22.9±2.3 years (18-33) participated in this study. An iPad and an e-ink reader were used to present calibrated text; each task lasted 5 minutes, and both the ambient illuminance level and the luminance of the screens were evaluated. Results: Eye-tracker data revealed a higher number of saccadic eye movements under minimum luminance than under maximum luminance. These differences were statistically significant for both the iPad (p=0.016) and the e-ink reader (p=0.002). Saccades were also longer at the minimum luminance level for both devices: 6.2±2.8 mm vs. 8.2±4.2 mm (e-ink, max vs. min) and 6.8±2.9 mm vs. 7.6±3.6 mm (iPad, max vs. min), and the blinking rate increased significantly under lower lighting conditions. Conclusions: Performance of reading tasks on electronic devices is strongly influenced by both the configuration of the screens and the ambient lighting; only small, transient differences in visual quality were found in healthy young people.
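Saccade counts and lengths of the kind reported here are typically derived with a velocity-threshold (I-VT style) procedure; a simplified sketch follows. The threshold, sampling rate, and units are illustrative assumptions and not those of the study.

```python
import numpy as np

def detect_saccades(x, y, fs, velocity_threshold=50.0):
    """x, y: gaze position traces (e.g. mm on the screen); fs: sampling rate (Hz).
    Returns (count, amplitudes): the number of saccades and their lengths, where
    a saccade is a contiguous run of samples whose speed exceeds the threshold."""
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs      # mm/s
    fast = np.r_[False, speed > velocity_threshold, False]     # pad for edge detection
    starts = np.flatnonzero(~fast[:-1] & fast[1:])              # run start indices
    ends = np.flatnonzero(fast[:-1] & ~fast[1:])                # run end indices (exclusive)
    amplitudes = [np.hypot(x[e - 1] - x[s], y[e - 1] - y[s]) for s, e in zip(starts, ends)]
    return len(starts), amplitudes

# Example: 2 s of synthetic gaze data at 120 Hz with one step-like saccade.
fs = 120
x = np.concatenate([np.zeros(fs), np.full(fs, 8.0)]) + 0.05 * np.random.randn(2 * fs)
y = np.zeros(2 * fs) + 0.05 * np.random.randn(2 * fs)
count, amplitudes = detect_saccades(x, y, fs)
print(count, amplitudes)
```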