Pub Date : 2023-03-31eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.3.5
Julia Beitner, Jason Helbing, Dejan Draschkow, Erwan J David, Melissa L-H Võ
Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory, as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on daily human behavior.
Title: "Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes." Journal of Eye Movement Research, 15(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10195094/pdf/
Pub Date : 2023-03-20eCollection Date: 2023-01-01DOI: 10.16910/jemr.16.1.3
Naila Ayala, Abdullah Zafar, Suzanne Kearns, Elizabeth Irving, Shi Cao, Ewa Niechwiej-Szwedo
Eye movements have been used to examine the cognitive function of pilots and to understand how information processing abilities impact performance. Traditional and advanced measures of gaze behaviour effectively reflect changes in cognitive load, situational awareness, and expert-novice differences. However, the extent to which gaze behaviour changes during the early stages of skill development has yet to be addressed. The current study investigated the impact of task difficulty on gaze behaviour in low-time pilots (N=18) while they completed simulated landing scenarios. An increase in task difficulty resulted in longer fixations on the runway and a reduction in stationary gaze entropy (gaze dispersion) and gaze transition entropy (sequence complexity). These findings suggest that pilots' gaze became less complex and more focused on fewer areas of interest as task difficulty increased. Additionally, a novel approach to identifying and tracking instances when pilots restrict their attention outside the cockpit (i.e., gaze tunneling) was explored and shown to be sensitive to changes in task difficulty. Altogether, the gaze-related metrics used in the present study provide valuable information for assessing pilots' gaze behaviour and help further our understanding of how gaze contributes to better performance in low-time pilots.
Title: "The effects of task difficulty on gaze behaviour during landing with visual flight rules in low-time pilots." Journal of Eye Movement Research, 16(1). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10643002/pdf/
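Stationary gaze entropy and gaze transition entropy, as used in the study above, are standard information-theoretic quantities computed from a scanpath labelled by areas of interest (AOIs). A minimal sketch of how they might be computed (the AOI names are invented for illustration; the paper's exact procedure may differ, e.g. in how transition probabilities are normalized):

```python
import math
from collections import Counter

def stationary_gaze_entropy(aoi_seq):
    """Shannon entropy (bits) of the distribution of fixations over AOIs.

    Lower values indicate gaze concentrated on fewer AOIs (less dispersion)."""
    counts = Counter(aoi_seq)
    n = len(aoi_seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def gaze_transition_entropy(aoi_seq):
    """Conditional entropy (bits) of AOI-to-AOI transitions, weighted by the
    empirical probability of the source AOI.

    Lower values indicate more stereotyped, less complex scan patterns."""
    transitions = Counter(zip(aoi_seq, aoi_seq[1:]))
    n_trans = len(aoi_seq) - 1
    sources = Counter(src for (src, _), c in transitions.items() for _ in range(c))
    h = 0.0
    for (src, _), c in transitions.items():
        p_src = sources[src] / n_trans        # probability of being in src
        p_cond = c / sources[src]             # probability of this transition given src
        h -= p_src * p_cond * math.log2(p_cond)
    return h

scanpath = ["runway", "airspeed", "runway", "altitude", "runway", "airspeed"]
print(stationary_gaze_entropy(scanpath))  # ~1.46 bits
print(gaze_transition_entropy(scanpath))  # ~0.55 bits
```

A drop in both measures under higher task difficulty, as reported above, would correspond to fixations piling up on fewer AOIs and transitions becoming more predictable.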
Pub Date : 2023-01-16eCollection Date: 2023-01-01DOI: 10.16910/jemr.16.1.1
Andrea Schneider, Beat Vollenwyder, Eva Krueger, Céline Mühlethaler, Dave B Miller, Jasmin Thurau, Achim Elfering
Train stations have become increasingly crowded, imposing stringent requirements on station design and on commuter navigation through these stations. In this study, we explored the use of mobile eye tracking in combination with observation and a survey to gain knowledge of customer experience in a crowded train station. We investigated the utilization of mobile eye tracking in ascertaining customers’ perception of the train station environment and analyzed the effect of a signalization prototype (visual pedestrian flow cues) intended to regulate pedestrian flow in a crowded underground passage. Gaze behavior, estimated crowd density, and comfort levels (an individual’s comfort in a given situation) were measured before and after the implementation of the prototype. The results revealed that the prototype was visible in conditions of low crowd density. However, in conditions of high crowd density, the prototype was less visible, and path choice was influenced by other commuters. Hence, herd behavior appeared to have a stronger effect than the implemented signalization prototype in conditions of high crowd density. Thus, mobile eye tracking in combination with observation and the survey successfully aided in understanding customers’ perception of the train station environment on a qualitative level and supported the evaluation of the signalization prototype in the crowded underground passage. However, the analysis process was laborious, which could be an obstacle to its practical use in gaining customer insights.
Title: "Mobile eye tracking applied as a tool for customer experience research in a crowded train station." Journal of Eye Movement Research, 16(1). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10624146/pdf/
Pub Date : 2022-12-30eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.3.4
Piotr Słowiński, Ben Grindley, Helen Muncie, David J Harris, Samuel J Vine, Mark R Wilson
We study an individual's propensity for rational thinking, i.e., the avoidance of cognitive biases (unconscious errors generated by our mental simplification methods), using a novel augmented reality (AR) platform. Specifically, we developed an odd-one-out (OOO) game-like task in AR designed to induce and assess confirmatory biases. Forty students completed the AR task in the laboratory and the short form of the comprehensive assessment of rational thinking (CART) online via the Qualtrics platform. We demonstrate that behavioural markers (based on eye, hand and head movements) can be associated (via linear regression) with the short CART score: more rational thinkers have slower head and hand movements and faster gaze movements in the second, more ambiguous round of the OOO task. Furthermore, short CART scores can be associated with the change in behaviour between the two rounds of the OOO task (one less and one more ambiguous): the hand-eye-head coordination patterns of the more rational thinkers are more consistent across the two rounds. Overall, we demonstrate the benefits of augmenting eye-tracking recordings with additional data modalities when trying to understand complicated behaviours.
Title: "Assessment of cognitive biases in Augmented Reality: Beyond eye tracking." Journal of Eye Movement Research, 15(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10171922/pdf/
Pub Date : 2022-12-30eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.5.6
Jason R Nezvadovitz, Hrishikesh M Rao
Electrooculography (EOG) is the measurement of eye movements using surface electrodes adhered around the eye. EOG systems can be designed with an unobtrusive form factor that is ideal for eye tracking in free-living settings over long durations, but the relationship between voltage and gaze direction requires frequent re-calibration, as the skin-electrode impedance and retinal adaptation vary over time. Here we propose a method for automatically calibrating the EOG-gaze relationship by fusing EOG signals with gyroscopic measurements of head movement whenever the vestibulo-ocular reflex (VOR) is active. The fusion is executed as recursive inference on a hidden Markov model that accounts for all rotational degrees of freedom and uncertainties simultaneously. This enables continual calibration using natural eye and head movements while minimizing the impact of sensor noise. No external devices such as monitors or cameras are needed. On average, our method's gaze estimates deviate by 3.54° from those of an industry-standard desktop video-based eye tracker. This discrepancy is on par with the latest mobile video eye trackers. Future work is focused on automatically detecting moments of VOR in free-living conditions.
Title: "Using Natural Head Movements to Continually Calibrate EOG Signals." Journal of Eye Movement Research, 15(5). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10576893/pdf/
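The core idea behind the auto-calibration above is that during VOR the eye counter-rotates to cancel head rotation, so eye-in-head velocity is (approximately) the negative of the gyroscope-measured head velocity, and the EOG volts-per-degree gain can be recovered from natural head movements alone. A much-simplified sketch on simulated data (the paper itself uses recursive inference on a hidden Markov model; the least-squares fit, signal values, and variable names below are illustrative assumptions):

```python
import numpy as np

# Simulated VOR segment: the head rotates while gaze stays fixed in space,
# so eye-in-head velocity is the negative of head velocity.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
head_vel = 30.0 * np.sin(2.0 * np.pi * 1.5 * t)   # deg/s, from the gyroscope
eye_vel = -head_vel                               # VOR: counter-rotation
true_gain = 0.015                                 # V/deg (unknown in practice)
eog_deriv = true_gain * eye_vel + rng.normal(0.0, 0.05, t.size)  # noisy EOG derivative, V/s

# Calibrate: a least-squares fit of the EOG derivative against -head velocity
# recovers the volts-per-degree gain without any on-screen calibration targets.
gain_est = np.dot(-head_vel, eog_deriv) / np.dot(head_vel, head_vel)
print(f"estimated gain: {gain_est:.4f} V/deg")
```

In a real system this fit would only be applied during segments where VOR is detected as active, which is exactly the open problem the abstract's last sentence points to.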
Pub Date : 2022-11-01eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.5.3
Xuling Li, Man Zeng, Lei Gao, Shan Li, Zibei Niu, Danhui Wang, Tianzhi Li, Xuejun Bai, Xiaolei Gao
Two eye-tracking experiments were used to investigate the mechanism of word satiation in Tibetan reading. The results revealed that, at a low repetition level, gaze duration and total fixation duration in the semantically unrelated condition were significantly longer than in the semantically related condition; at a medium repetition level, reaction time in the semantically related condition was significantly longer than in the semantically unrelated condition; at a high repetition level, the total fixation duration and reaction time in the semantically related condition were significantly longer than in the semantically unrelated condition. However, fixation duration and reaction time showed no significant difference between the similar and dissimilar orthography at any repetition level. These findings imply that there are semantic priming effects in Tibetan reading at a low repetition level, but semantic satiation effects at greater repetition levels, which occur in the late stage of lexical processing.
Title: "The Mechanism of Word Satiation in Tibetan Reading: Evidence from Eye Movements." Journal of Eye Movement Research, 15(5). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10541290/pdf/
Pub Date : 2022-10-13eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.4.3
Andrea Strandberg, Mattias Nilsson, Per Östberg, Gustaf Öqvist Seimyr
The characteristics of children's eye movements during reading change as they gradually become better readers. However, few eye tracking studies have investigated children's reading and reading development, and little is known about the relationship between reading-related eye movement measures and reading assessment outcomes. We recorded and analyzed three basic eye movement measures in an ecologically valid eye-tracking set-up. The participants were Swedish children (n = 2876) who were recorded in their normal school environment. The relationship between eye movements and reading assessment outcomes was analyzed using linear mixed-effects models. We found similar age-related changes in eye movement characteristics as established in previous studies, and that eye movements seem to correlate with reading outcome measures. Additionally, our results show that eye movements predict the results on several tests from a word reading assessment. Hence, eye tracking may potentially be a useful tool in assessing reading development.
Title: "Eye Movements during Reading and their Relationship to Reading Assessment Outcomes in Swedish Elementary School Children." Journal of Eye Movement Research, 15(4). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205180/pdf/
Pub Date : 2022-09-07eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.3.3
Immo Schuetz, Katja Fiehler
A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications require different levels of tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments.
Title: "Eye Tracking in Virtual Reality: Vive Pro Eye Spatial Accuracy, Precision, and Calibration Reliability." Journal of Eye Movement Research, 15(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10136368/pdf/
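Spatial accuracy is conventionally the mean angular offset between measured gaze and the true target direction, while precision captures sample-to-sample scatter (here as RMS of successive angular distances). A minimal sketch assuming gaze is available as 3D direction vectors, as is typical in VR eye trackers (the study does not publish this exact code, so treat it as an illustration of the definitions rather than their pipeline):

```python
import numpy as np

def angular_deg(v, w):
    """Row-wise angle in degrees between unit direction vectors."""
    cos = np.clip(np.sum(v * w, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_precision(gaze, target):
    """Accuracy: mean angular offset of gaze samples from the target direction.
    Precision: RMS of successive sample-to-sample angular distances."""
    gaze = gaze / np.linalg.norm(gaze, axis=1, keepdims=True)
    target = target / np.linalg.norm(target)
    acc = angular_deg(gaze, target).mean()
    inter = angular_deg(gaze[:-1], gaze[1:])
    prec = np.sqrt(np.mean(inter ** 2))
    return acc, prec

# Synthetic example: samples offset exactly 1 degree from a straight-ahead target.
target = np.array([0.0, 0.0, 1.0])
samples = np.array([[np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))]] * 10)
acc, prec = accuracy_precision(samples, target)
print(f"accuracy: {acc:.2f} deg, precision: {prec:.2f} deg")
```

Evaluating such metrics per target eccentricity, as the study does, shows how data quality degrades away from central gaze.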
Pub Date : 2022-06-14eCollection Date: 2022-01-01DOI: 10.16910/jemr.15.1.1
Norberto Pereira, Maria Armanda Costa, Manuela Guerreiro
This study aimed to investigate the neuropsycholinguistic functioning of children with Developmental Dyslexia (DD) and Attention-Deficit/Hyperactivity Disorder - inattentive subtype (ADHD-I) in a reading task. The psycholinguistic profile of both groups was assessed using a battery of neuropsychological and linguistic tests and compared to that of typical readers. Participants completed a silent reading task with lexical manipulation of the text. Eye movements were recorded and compared with the aim of identifying cognitive processes involved in reading that could help differentiate the groups. The study examined whether word-frequency and word-length effects distinguish between groups. Participants included 19 typical readers, 21 children diagnosed with ADHD-I and 19 children with DD. All participants were attending 4th grade and had a mean age of 9.08 years. Children with DD and ADHD-I exhibited significantly different cognitive and linguistic profiles on almost all measures evaluated when compared to typical readers. The interaction of word-length and word-frequency effects also differed significantly across the three experimental groups. The results support the multiple cognitive deficits theory. While the shared deficits support the evidence of a phonological disorder present in both conditions, the specific ones corroborate the hypothesis of an oculomotor dysfunction in DD and a visuo-spatial attention dysfunction in ADHD.
{"title":"Effects of word length and word frequency among dyslexic, ADHD-I and typical readers.","authors":"Norberto Pereira, Maria Armanda Costa, Manuela Guerreiro","doi":"10.16910/jemr.15.1.1","DOIUrl":"10.16910/jemr.15.1.1","url":null,"abstract":"<p><p>This study aimed to investigate the neuropsycholinguistic functioning of children with Developmental Dyslexia (DD) and Attention-Deficit/Hyperactivity Disorder - inattentive subtype (ADHD-I) in a reading task. The psycholinguistic profile of both groups was assessed using a battery of neuropsychological and linguistic tests and compared to typical readers. Participants were submitted to a silent reading task with lexical manipulation of the text. Eye movements were recorded and compared aiming to find cognitive processes involved in reading that could help differentiate groups. The study examined whether word-frequency and word-length effects distinguish between groups. Participants included 19 typical readers, 21 children diagnosed with ADHD-I and 19 children with DD. All participants were attending 4<sup>th</sup> grade and had a mean age of 9.08 years. Children with DD and ADHDI exhibited significant different cognitive and linguistic profiles on almost all measures evaluated when compared to typical readers. The effects of word length and word frequency interaction also differed significantly in the 3 experimental groups. The results support the multiple cognitive deficits theory. 
While the shared deficits support the evidence of a phonological disorder present in both conditions, the specific ones corroborate the hypothesis of an oculomotor dysfunction in DD and a visuo-spatial attention dysfunction in ADHD.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"15 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10063363/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10299611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-visual eye movements (NVEMs) are eye movements that do not serve the provision of visual information. As yet, their cognitive origins and meaning remain under-explored in eye-movement research. The first problem presenting itself in pursuit of their study is one of annotation: by virtue of being non-visual, they are not necessarily bound to a specific surface or object of interest, rendering conventional eye trackers non-ideal for their study. This, however, makes it potentially viable to investigate them without requiring high-resolution data. In this report, we present two approaches to annotating NVEM data: one of them grid-based, involving manual annotation in ELAN (18), the other Cartesian coordinate-based, derived algorithmically through OpenFace (1). We evaluated a) the two approaches in themselves, e.g. in terms of consistency, as well as b) their compatibility, i.e. the possibilities of mapping one onto the other. In the case of a), we found good overall consistency in both approaches; in the case of b), there is evidence for the eventual possibility of mapping the OpenFace gaze estimations onto the manual coding grid.
{"title":"Investigating Non-Visual Eye Movements Non-Intrusively: Comparing Manual and Automatic Annotation Styles","authors":"Jeremias Stüber, Lina Junctorius, A. Hohenberger","doi":"10.16910/jemr.15.2.1","DOIUrl":"https://doi.org/10.16910/jemr.15.2.1","url":null,"abstract":"Non-visual eye movements (NVEMs) are eye movements that do not serve the provision of visual information. As yet, their cognitive origins and meaning remain under-explored in eye-movement research. The first problem presenting itself in pursuit of their study is one of annotation: by virtue of being non-visual, they are not necessarily bound to a specific surface or object of interest, rendering conventional eye trackers non-ideal for their study. This, however, makes it potentially viable to investigate them without requiring high-resolution data. In this report, we present two approaches to annotating NVEM data: one of them grid-based, involving manual annotation in ELAN (18), the other Cartesian coordinate-based, derived algorithmically through OpenFace (1). We evaluated a) the two approaches in themselves, e.g. in terms of consistency, as well as b) their compatibility, i.e. the possibilities of mapping one onto the other. 
In the case of a), we found good overall consistency in both approaches; in the case of b), there is evidence for the eventual possibility of mapping the OpenFace gaze estimations onto the manual coding grid.","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2022-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42439636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
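The mapping described in the abstract above, from OpenFace's Cartesian gaze estimates onto a manual coding grid, can be pictured with a minimal sketch. Everything below is illustrative rather than the authors' method: the 3x3 grid, the direction labels, the 0.15 rad centre threshold, and the sign conventions are all assumptions; only the gaze_angle_x/gaze_angle_y column names come from OpenFace's standard CSV output.

```python
# Hypothetical sketch: binning OpenFace gaze-angle estimates into a
# 3x3 direction grid of the kind used for manual NVEM coding.
# Assumptions (not from the paper): grid size, labels, threshold, and
# the sign convention (negative x = left, negative y = up) -- verify the
# latter against the OpenFace output-format documentation before use.

THRESHOLD = 0.15  # radians; assumed dead zone counted as "center"

LABELS = [
    ["up-left", "up", "up-right"],
    ["left", "center", "right"],
    ["down-left", "down", "down-right"],
]

def grid_cell(gaze_angle_x: float, gaze_angle_y: float,
              threshold: float = THRESHOLD) -> str:
    """Bin one gaze-angle pair (radians) into one of nine grid labels."""
    col = 1 if abs(gaze_angle_x) <= threshold else (0 if gaze_angle_x < 0 else 2)
    row = 1 if abs(gaze_angle_y) <= threshold else (0 if gaze_angle_y < 0 else 2)
    return LABELS[row][col]
```

Applied frame by frame to OpenFace output, a function like this would yield a label sequence directly comparable to the manual ELAN annotations, which is one way the compatibility evaluation mentioned in point b) could be operationalised.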