Rhythmic subvocalization: An eye-tracking study on silent poetry reading.
Judith Beck, Lars Konieczny
Pub Date: 2021-09-14 | eCollection Date: 2020-01-01 | DOI: 10.16910/jemr.13.3.5
The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks; in prose layout, verse endings could fall mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL builds up auditive expectations based on a rhythmic "audible gestalt", and we propose that this rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
{"title":"Rhythmic subvocalization: An eye-tracking study on silent poetry reading.","authors":"Judith Beck, Lars Konieczny","doi":"10.16910/jemr.13.3.5","DOIUrl":"https://doi.org/10.16910/jemr.13.3.5","url":null,"abstract":"<p><p>The present study investigates effects of conventionally metered and rhymed poetry on eyemovements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic \"audible gestalt\" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt-anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt-anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye-movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"13 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task.
Ondřeji Straka, Šárka Portešová, Daniela Halámková, Michal Jabůrek
Pub Date: 2021-09-14 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.1
In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers with regard to metacognitive monitoring and related metacognitive strategies. The question of whether gifted children surpass their typically developing peers not only in intellectual abilities but also in their level of metacognitive skills has not been convincingly answered so far. We sought to examine indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending the final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics assumed to bear on metacognitive skills were analyzed: overall trial duration, mean fixation duration, number of regressions, and normalized gaze transition entropy. No significant differences between gifted and average children were found in normalized gaze transition entropy, in mean fixation duration, or, after controlling for trial duration, in the number of regressions. The two groups differed in the time devoted to solving the task. They also differed significantly in the association between time devoted to the task and subjective confidence rating: only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.
{"title":"Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task.","authors":"Ondřeji Straka, Šárka Portešová, Daniela Halámková, Michal Jabůrek","doi":"10.16910/jemr.14.4.1","DOIUrl":"https://doi.org/10.16910/jemr.14.4.1","url":null,"abstract":"<p><p>In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question whether gifted children surpass their typically developing peers not only in the intellectual abilities, but also in their level of metacognitive skills, has not been convincingly answered so far. We sought to examine the indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics supposed to bear on metacognitive skills, namely the overall trial duration, mean fixation duration, number of regressions and normalized gaze transition entropy, were analyzed. No significant differences between gifted and average children were found in the normalized gaze transition entropy, in mean fixation duration, nor - after controlling for the trial duration - in number of regressions. Both groups of children differed in the time devoted to solving the task. Both groups significantly differed in the association between time devoted to the task and the participants' subjective confidence rating, where only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 4","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8559419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reading Eye Movements Performance on iPad vs Print Using a Visagraph.
Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco, Balamurali Vasudevan
Pub Date: 2021-09-14 | DOI: 10.16910/jemr.14.2.6
This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. Thirty-one visually normal subjects were enrolled. Two passages from the Visagraph standardized text were read on the iPad and in print, and eye-movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer with the iPad, at 270 ms (40), than with the printed text, at 260 ms (40) (p=0.04). Subjects' mean reading rate was significantly lower on the iPad, at 294 words per minute (wpm), than with the printed text, at 318 wpm (p=0.03). Mean (SD) overall reading duration was significantly longer on the iPad, at 31 s (9.3), than with the printed text, at 28 s (8.0) (p=0.02). Overall reading performance is lower with an iPad than with printed text in normal individuals. These findings might be more consequential for children and slower adult readers when they read using iPads.
{"title":"Reading Eye Movements Performance on iPad vs Print Using a Visagraph.","authors":"Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco, Balamurali Vasudevan","doi":"10.16910/jemr.14.2.6","DOIUrl":"https://doi.org/10.16910/jemr.14.2.6","url":null,"abstract":"This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. 31 visually-normal subjects were enrolled. Two of the passages were read from the Visagraph standardized text on iPad and Print. Eye movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer with the iPad at 270 ms (40) compared to the printed text (p=0.04) at 260 ms (40). Subjects’ mean reading rates were significantly lower on the iPad at 294 words per minute (wpm) than the printed text at 318 wpm (p=0.03). The mean (SD) overall reading duration was significantly (p=0.02) slower on the iPad that took 31 s (9.3) than the printed text at 28 s (8.0). Overall reading performance is lower with an iPad than printed text in normal individuals. These findings might be more consequential in children and adult slower readers when they read using iPads.","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557948/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I2DNet - Design and Real-Time Evaluation of Appearance-based gaze estimation system.
L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun, Pradipta Biswas
Pub Date: 2021-08-31 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.2
The gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eyeball model and obtain a gaze point estimate, while appearance-based methods attempt to map captured eye images directly to the gaze point without any handcrafted features. Recently, the availability of large datasets and novel deep learning techniques has enabled appearance-based methods to achieve higher accuracy than model-based approaches. However, many appearance-based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well current state-of-the-art approaches perform in real time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed at improving subject-independent gaze estimation accuracy, which achieved state-of-the-art mean angle errors of 4.3 and 8.4 degrees on the MPIIGaze and RT-Gene datasets, respectively. We evaluated the proposed system as a real-time gaze-controlled interface for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. In a user study with 16 participants, our proposed system reduced selection time and the number of missed selections statistically significantly compared to the other two systems.
{"title":"I2DNet - Design and Real-Time Evaluation of Appearance-based gaze estimation system.","authors":"L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun, Pradipta Biswas","doi":"10.16910/jemr.14.4.2","DOIUrl":"https://doi.org/10.16910/jemr.14.4.2","url":null,"abstract":"<p><p>Gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eye-ball model to obtain gaze point estimate while appearance-based methods attempt to directly map captured eye images to gaze point without any handcrafted features. Recently, availability of large datasets and novel deep learning techniques made appearance-based methods achieve superior accuracy than model-based approaches. However, many appearance- based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well the current state-of-the-art approaches perform in real-time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed to improve subject- independent gaze estimation accuracy that achieved a state-of-the-art 4.3 and 8.4 degree mean angle error on the MPIIGaze and RT-Gene datasets respectively. We have evaluated the proposed system as a gaze-controlled interface in real-time for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. We have conducted a user study with 16 participants, and our proposed system reduces selection time and the number of missed selections statistically significantly compared to other two systems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 4","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8561667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Viewing Patterns and Perspectival Paintings: An Eye-Tracking Study on the Effect of the Vanishing Point.
Arthur Crucq
Pub Date: 2021-08-27 | eCollection Date: 2020-01-01 | DOI: 10.16910/jemr.13.2.15
Linear perspective has long been used to create the illusion of three-dimensional space on the picture plane. One of its central axioms comes from Euclidean geometry and holds that all parallel lines converge in a single vanishing point. Although linear perspective provided the painter with a means to organize the painting, the question is whether the gaze of the beholder is also affected by the underlying structure of linear perspective: for instance, whether the orthogonals leading to the vanishing point automatically guide the beholder's gaze. This was investigated in a pilot eye-tracking experiment at the Lab for Cognitive Research in Art History (CReA) of the University of Vienna. It appears that in some compositions the vanishing point attracts the gaze of the participant. The effect is stronger when the vanishing point coincides with the central vertical axis of the painting, and stronger still when it also coincides with a major visual feature such as an object or figure. The latter raises the question of what exactly attracts the gaze of the viewer first: the geometrical construct of the vanishing point or the visual feature?
{"title":"Viewing Patterns and Perspectival Paintings: An Eye-Tracking Study on the Effect of the Vanishing Point.","authors":"Arthur Crucq","doi":"10.16910/jemr.13.2.15","DOIUrl":"https://doi.org/10.16910/jemr.13.2.15","url":null,"abstract":"<p><p>Linear perspective has long been used to create the illusion of three-dimensional space on the picture plane. One of its central axioms comes from Euclidean geometry and holds that all parallel lines converge in a single vanishing point. Although linear perspective provided the painter with a means to organize the painting, the question is whether the gaze of the beholder is also affected by the underlying structure of linear perspective: for instance, in such a way that the orthogonals leading to the vanishing point also automatically guides the beholder's gaze. This was researched during a pilot study by means of an eye-tracking experiment at the Lab for Cognitive Research in Art History (CReA) of the University of Vienna. It appears that in some compositions the vanishing point attracts the view of the participant. This effect is more significant when the vanishing point coincides with the central vertical axis of the painting, but is even stronger when the vanishing point also coincides with a major visual feature such as an object or figure. The latter calls into question what exactly attracts the gaze of the viewer, i.e., what comes first: the geometrical construct of the vanishing point or the visual feature?</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"13 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39538947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking.
Liu Xin, Zheng Bin, Duan Xiaoqin, He Wenjing, Li Yuandong, Zhao Jinyu, Zhao Chen, Wang Lin
Pub Date: 2021-07-13 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.2.5
Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their skills. When trainees encounter difficulty in practice, they need feedback from experts to improve their performance; personal feedback, however, is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during simulated colonoscopy. We examined changes in eye-movement behavior during moments of navigation loss (MNL), a signature sign of task difficulty during colonoscopy, and tested whether deep learning algorithms can detect MNLs from eye-tracking data. Human gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks with three different data-feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to an expert's judgment of the MNLs based on colonoscopic videos. The best classification outcome was achieved when the human eye data were augmented with 1,000 synthesized samples, optimizing accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%). This study builds an important foundation for our work on developing an education system for training healthcare skills using simulation.
{"title":"Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking.","authors":"Liu Xin, Zheng Bin, Duan Xiaoqin, He Wenjing, Li Yuandong, Zhao Jinyu, Zhao Chen, Wang Lin","doi":"10.16910/jemr.14.2.5","DOIUrl":"https://doi.org/10.16910/jemr.14.2.5","url":null,"abstract":"<p><p>Eye-tracking can help decode the intricate control mechanism in human performance. In healthcare, physicians-in-training require extensive practice to improve their healthcare skills. When a trainee encounters any difficulty in the practice, they will need feedback from experts to improve their performance. Personal feedback is time-consuming and subjected to bias. In this study, we tracked the eye movements of trainees during their colonoscopic performance in simulation. We examined changes in eye movement behavior during the moments of navigation loss (MNL), a signature sign for task difficulty during colonoscopy, and tested whether deep learning algorithms can detect the MNL by feeding data from eye-tracking. Human eye gaze and pupil characteristics were learned and verified by the deep convolutional generative adversarial networks (DCGANs); the generated data were fed to the Long Short-Term Memory (LSTM) networks with three different data feeding strategies to classify MNLs from the entire colonoscopic procedure. Outputs from deep learning were compared to the expert's judgment on the MNLs based on colonoscopic videos. The best classification outcome was achieved when we fed human eye data with 1000 synthesized eye data, where accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%) were optimized. This study built an important foundation for our work of developing an education system for training healthcare skills using simulation.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8327395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39273168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vergence Fusion Sustaining Oscillations.
John Semmlow, Chang Yaramothu, Mitchell Scheiman, Tara L Alvarez
Pub Date: 2021-06-28 | DOI: 10.16910/jemr.14.1.4
Introduction: Previous studies have shown that the slow, or fusion sustaining, component of disparity vergence contains oscillatory behavior as would be expected if fusion is sustained by visual feedback. This study extends the examination of this behavior to a wider range of frequencies and a larger number of subjects.
Methods: Disparity vergence responses to symmetrical 4.0 deg step changes in target position were recorded in 20 subjects. Approximately three seconds of the late component of each response were isolated using interactive graphics, and the frequency spectrum was calculated. Peaks in these spectra associated with oscillatory behavior were identified and examined.
Results: All subjects exhibited oscillatory behavior with fundamental frequencies ranging between 0.37 and 0.55 Hz, much lower than those identified in the earlier study. All responses showed significant higher-frequency components; their relationship to the fundamental frequency suggests that they may be harmonics. A correlation was found across subjects between the amplitude of the fundamental frequency and the maximum velocity of the fusion initiating component, probably due to the gain of shared neural pathways.
Conclusion: Low-frequency oscillatory behavior was found in all subjects, adding support to the view that the slow, or fusion sustaining, component is mediated by feedback control.
{"title":"Vergence Fusion Sustaining Oscillations.","authors":"John Semmlow, Chang Yaramothu, Mitchell Scheiman, Tara L Alvarez","doi":"10.16910/jemr.14.1.4","DOIUrl":"https://doi.org/10.16910/jemr.14.1.4","url":null,"abstract":"<p><strong>Introduction: </strong>Previous studies have shown that the slow, or fusion sustaining, component of disparity vergence contains oscillatory behavior as would be expected if fusion is sustained by visual feedback. This study extends the examination of this behavior to a wider range of frequencies and a larger number of subjects.</p><p><strong>Methods: </strong>Disparity vergence responses to symmetrical 4.0 deg step changes in target position were recorded in 20 subjects. Approximately three seconds of the late component of each response were isolated using interactive graphics and the frequency spectrum calculated. Peaks in these spectra associated with oscillatory behavior were identified and examined.</p><p><strong>Results: </strong>All subjects exhibited oscillatory behavior with fundamental frequencies ranging between 0.37 and 0.55 Hz; much lower than those identified in the earlier study. All responses showed significant higher frequency components. The relationship between higher frequency components and the fundamental frequency suggest may be harmonics. A correlation was found across subjects between the amplitude of the fundamental frequency and the maximum velocity of the fusion initiating component probably due to the gain of shared neural pathways.</p><p><strong>Conclusion: </strong>Low frequency oscillatory behavior was found in all subjects adding support that the slow, or fusion sustaining, component is mediated by a feedback control.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8247062/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39149463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing the usage of pupillary based indicators for cognitive workload.
Benedict C O F Fehringer
Pub Date: 2021-06-11 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.2.4
The Index of Cognitive Activity (ICA) and its open-source alternative, the Index of Pupillary Activity (IPA), are pupil-based indicators of cognitive workload that are independent of light changes. Both indicators were investigated with regard to the influences of cognitive demand, fatigue, and inter-individual differences. In addition, the variability of pupil changes between the two eyes (difference values) was compared with the usually calculated pupillary changes averaged over both eyes (mean values). Fifty-five participants performed a spatial thinking test, the R-Cube-Vis Test, with six distinct difficulty levels, and a simple fixation task before and after the R-Cube-Vis Test. The distributions of the ICA and IPA were comparable. ICA/IPA values were lower during the simple fixation tasks than during the cognitively demanding R-Cube-Vis Test. A fatigue effect was found only for the mean ICA values. The effects of both indicators across difficulty levels were larger when inter-individual differences were controlled using z-standardization. The difference values seemed to control for fatigue and appeared to differentiate better between more demanding cognitive tasks than the mean values. The derived recommendations for using ICA/IPA values help to gain more insight into individual performance and behavior in, e.g., training and testing scenarios.
{"title":"Optimizing the usage of pupillary based indicators for cognitive workload.","authors":"Benedict C O F Fehringer","doi":"10.16910/jemr.14.2.4","DOIUrl":"https://doi.org/10.16910/jemr.14.2.4","url":null,"abstract":"<p><p>The Index of Cognitive Activity (ICA) and its open-source alternative, the Index of Pupillary Activity (IPA), are pupillary-based indicators for cognitive workload and are independent of light changes. Both indicators were investigated regarding influences of cognitive demand, fatigue and inter-individual differences. In addition, the variability of pupil changes between both eyes (difference values) were compared with the usually calculated pupillary changes averaged over both eyes (mean values). Fifty-five participants performed a spatial thinking test, the R-Cube-Vis Test, with six distinct difficulty levels and a simple fixation task before and after the R-Cube-Vis Test. The distributions of the ICA and IPA were comparable. The ICA/IPA values were lower during the simple fixation tasks than during the cognitively demanding R-Cube-Vis Test. A fatigue effect was found only for the mean ICA values. The effects of both indicators were larger between difficulty levels of the test when inter-individual differences were controlled using z-standardization. The difference values seemed to control for fatigue and appeared to differentiate better between more demanding cognitive tasks than the mean values. The derived recommendations for the ICA/IPA values are beneficial to gain more insights in individual performance and behavior during, e.g., training and testing scenarios.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8299071/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39220195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal.
Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev
Pub Date: 2021-06-03 | DOI: 10.16910/jemr.14.3.2
Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal; in the context of an underlying multimodal distribution, however, the mean is less interpretable. We present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed (of the entire dataset, 1.7% were unimodal and normal). This multimodality holds even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measuring accuracy in the face of multimodality, and we address the role of fixation drift in partially explaining it.
{"title":"Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal.","authors":"Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev","doi":"10.16910/jemr.14.3.2","DOIUrl":"10.16910/jemr.14.3.2","url":null,"abstract":"<p><p>Typically, the position error of an eye-tracking device is measured as the distance of the eye-position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We will present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality is true even if there is only a single, continuous tracking fixation segment per trial. We present several approaches to measure accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189800/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39010217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
<p><p>The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w
他们的共同目标是,在数字时代,根据人类的能力和局限性,提高技术的潜力和安全性。
{"title":"Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.","authors":"Rudolf Groner, Enkelejda Kasneci","doi":"10.16910/jemr.12.3.0","DOIUrl":"10.16910/jemr.12.3.0","url":null,"abstract":"<p><p>The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. 
Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"12 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8182438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}