Pub Date: 2021-10-21 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.2.1
Ioannis Smyrnakis, Vassilios Andreadakis, Andriani Rina, Nadia Boufachrentin, Ioannis M Aslanides
The main purpose of this study is to compare the silent and loud reading ability of typical and dyslexic readers, using eye-tracking technology to monitor the reading process. The participants (156 students of normal intelligence) were first divided into three groups based on their school grade, and each group was then further separated into typical readers and students diagnosed with dyslexia. The students read the same text twice, once silently and once out loud. Various eye-tracking parameters were calculated for both types of reading. In general, the performance of the typical students was better for both modes of reading, regardless of age. In the older age groups, typical readers performed better at silent reading. The dyslexic readers in all age groups performed better at reading out loud. However, this was less prominent in secondary and upper secondary dyslexics, reflecting a slow shift towards silent reading mode as they age. Our results confirm that the eye-tracking parameters of dyslexics improve with age in both silent and loud reading, and their reading preference shifts slowly towards silent reading. Typical readers do not show a clear reading-mode preference before 4th grade; after that age, however, they develop a clear preference for silent reading.
{"title":"Silent versus Reading Out Loud modes: An eye-tracking study.","authors":"Ioannis Smyrnakis, Vassilios Andreadakis, Andriani Rina, Nadia Bοufachrentin, Ioannis M Aslanides","doi":"10.16910/jemr.14.2.1","DOIUrl":"10.16910/jemr.14.2.1","url":null,"abstract":"<p><p>The main purpose of this study is to compare the silent and loud reading ability of typical and dyslexic readers, using eye-tracking technology to monitor the reading process. The participants (156 students of normal intelligence) were first divided into three groups based on their school grade, and each subgroup was then further separated into typical readers and students diagnosed with dyslexia. The students read the same text twice, one time silently and one time out loud. Various eye-tracking parameters were calculated for both types of reading. In general, the performance of the typical students was better for both modes of reading - regardless of age. In the older age groups, typical readers performed better at silent reading. The dyslexic readers in all age groups performed better at reading out loud. However, this was less prominent in secondary and upper secondary dyslexics, reflecting a slow shift towards silent reading mode as they age. Our results confirm that the eye-tracking parameters of dyslexics improve with age in both silent and loud reading, and their reading preference shifts slowly towards silent reading. Typical readers, before 4th grade do not show a clear reading mode preference, however, after that age they develop a clear preference for silent reading.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8565638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39864683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-21 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.6
Suzane Vassallo, Jacinta Douglas
The visual scanpath to emotional facial expressions was recorded in BR, a 35-year-old male with chronic severe traumatic brain injury (TBI), both before and after he underwent intervention. The novel intervention paradigm combined visual scanpath training with verbal feedback and was implemented over a 3-month period using a single case design (AB) with one follow-up session. At baseline BR's scanpath was restricted, characterised by gaze allocation primarily to salient facial features on the right side of the face stimulus. Following intervention his visual scanpath became more lateralised, although he continued to demonstrate an attentional bias to the right side of the face stimulus. This study is the first to demonstrate change in both the pattern and the position of the visual scanpath to emotional faces following intervention in a person with chronic severe TBI. In addition, these findings extend upon our previous work to suggest that modification of the visual scanpath through targeted facial feature training can support improved facial recognition performance in a person with severe TBI.
{"title":"Visual scanpath training to emotional faces following severe traumatic brain injury: A single case design.","authors":"Suzane Vassallo, Jacinta Douglas","doi":"10.16910/jemr.14.4.6","DOIUrl":"https://doi.org/10.16910/jemr.14.4.6","url":null,"abstract":"<p><p>The visual scanpath to emotional facial expressions was recorded in BR, a 35-year-old male with chronic severe traumatic brain injury (TBI), both before and after he underwent intervention. The novel intervention paradigm combined visual scanpath training with verbal feedback and was implemented over a 3-month period using a single case design (AB) with one follow up session. At baseline BR's scanpath was restricted, characterised by gaze allocation primarily to salient facial features on the right side of the face stimulus. Following intervention his visual scanpath became more lateralised, although he continued to demonstrate an attentional bias to the right side of the face stimulus. This study is the first to demonstrate change in both the pattern and the position of the visual scanpath to emotional faces following intervention in a person with chronic severe TBI. In addition, these findings extend upon our previous work to suggest that modification of the visual scanpath through targeted facial feature training can support improved facial recognition performance in a person with severe TBI.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8575428/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39716446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-21 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.2.3
Suvi K Holm, Tuomo Häikiö, Konstantin Olli, Johanna K Kaakinen
The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes and fixation distances from the center of the screen. The individual differences emerged both during specific events of the video and across the video as a whole. The results highlight that an unedited, fast-paced and cluttered dynamic scene can bring out individual differences in dynamic scene viewing.
{"title":"Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos.","authors":"Suvi K Holm, Tuomo Häikiö, Konstantin Olli, Johanna K Kaakinen","doi":"10.16910/jemr.14.2.3","DOIUrl":"https://doi.org/10.16910/jemr.14.2.3","url":null,"abstract":"<p><p>The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes and fixation distances from the center of the screen. The individual differences showed during specific events of the video as well as during the video as a whole. The results highlight that an unedited, fast-paced and cluttered dynamic scene can bring about individual differences in dynamic scene viewing.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8566014/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39864685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-21 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.5
Lucas Lörch
The present study investigated how eye movements were associated with performance accuracy during sight-reading. Participants performed a complex span task in which sequences of single quarter note symbols that either enabled chunking or did not enable chunking were presented for subsequent serial recall. In between the presentation of each note, participants sight-read a notated melody on an electric piano at a tempo of 70 bpm. All melodies were unique but contained four types of note pairs: eighth-eighth, eighth-quarter, quarter-eighth, quarter-quarter. Analyses revealed that reading with fewer fixations was associated with more accurate note onsets. Fewer fixations might be advantageous for sight-reading as fewer saccades have to be planned and less information has to be integrated. Moreover, the quarter-quarter note pair was read with a larger number of fixations and the eighth-quarter note pair was read with a longer gaze duration. This suggests that when rhythm is processed, additional beats might trigger re-fixations and unconventional rhythmical patterns might trigger longer gazes. Neither recall accuracy nor chunking processes were found to explain additional variance in the eye movement data.
{"title":"The association of eye movements and performance accuracy in a novel sight-reading task.","authors":"Lucas Lörch","doi":"10.16910/jemr.14.4.5","DOIUrl":"https://doi.org/10.16910/jemr.14.4.5","url":null,"abstract":"<p><p>The present study investigated how eye movements were associated with performance accuracy during sight-reading. Participants performed a complex span task in which sequences of single quarter note symbols that either enabled chunking or did not enable chunking were presented for subsequent serial recall. In between the presentation of each note, participants sight-read a notated melody on an electric piano in the tempo of 70 bpm. All melodies were unique but contained four types of note pairs: eighth-eighth, eighthquarter, quarter-eighth, quarter-quarter. Analyses revealed that reading with fewer fixations was associated with a more accurate note onset. Fewer fixations might be advantageous for sight-reading as fewer saccades have to be planned and less information has to be integrated. Moreover, the quarter-quarter note pair was read with a larger number of fixations and the eighth-quarter note pair was read with a longer gaze duration. This suggests that when rhythm is processed, additional beats might trigger re-fixations and unconventional rhythmical patterns might trigger longer gazes. Neither recall accuracy nor chunking processes were found to explain additional variance in the eye movement data.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8573852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39604841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-21 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.3
Yiheng Wang, Yanping Liu
Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences people's decisions and the boundary of the gaze effect. The current experiment used an adaptive gaze-contingent manipulation, adding a self-determined option, to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced people's decisions. This result was consistent with the attentional drift diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplifying the value of the attended option. Therefore, gaze duration influences the decision when people do not have a clear preference. The results also showed that the similarity between options and the computational difficulty influenced the gaze effect. This result was inconsistent with prior research that used option similarity to represent difficulty, suggesting that similarity between options and computational difficulty engage different underlying mechanisms of decision difficulty.
{"title":"Can longer gaze duration determine risky investment decisions? An interactive perspective.","authors":"Yiheng Wang, Yanping Liu","doi":"10.16910/jemr.14.4.3","DOIUrl":"https://doi.org/10.16910/jemr.14.4.3","url":null,"abstract":"<p><p>Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences people's decisions and the boundary of the gaze effect. The current experiment used adaptive gaze-contingent manipulation by adding a self-determined option to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced people's decisions. This result was consistent with the attentional diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplify the value of the choice. Therefore, the gaze duration would influence the decision when people do not have clear preference.The result also showed that the similarity between options and the computational difficulty would also influence the gaze effect. This result was inconsistent with prior research that used option similarities to represent difficulty, suggesting that both similarity between options and computational difficulty induce different underlying mechanisms of decision difficulty.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8562223/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-14 | eCollection Date: 2020-01-01 | DOI: 10.16910/jemr.13.3.5
Judith Beck, Lars Konieczny
The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout, verse endings could fall mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic "audible gestalt" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt-anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt-anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
{"title":"Rhythmic subvocalization: An eye-tracking study on silent poetry reading.","authors":"Judith Beck, Lars Konieczny","doi":"10.16910/jemr.13.3.5","DOIUrl":"https://doi.org/10.16910/jemr.13.3.5","url":null,"abstract":"<p><p>The present study investigates effects of conventionally metered and rhymed poetry on eyemovements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic \"audible gestalt\" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt-anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt-anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye-movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-14 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.1
Ondřeji Straka, Šárka Portešová, Daniela Halámková, Michal Jabůrek
In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question of whether gifted children surpass their typically developing peers not only in intellectual abilities but also in metacognitive skills has not been convincingly answered so far. We sought to examine indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending the final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics supposed to bear on metacognitive skills, namely overall trial duration, mean fixation duration, number of regressions and normalized gaze transition entropy, were analyzed. No significant differences between gifted and average children were found in normalized gaze transition entropy, in mean fixation duration, nor, after controlling for trial duration, in the number of regressions. The two groups differed in the time devoted to solving the task, and they also differed significantly in the association between time on task and subjective confidence ratings: only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.
{"title":"Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task.","authors":"Ondřeji Straka, Šárka Portešová, Daniela Halámková, Michal Jabůrek","doi":"10.16910/jemr.14.4.1","DOIUrl":"https://doi.org/10.16910/jemr.14.4.1","url":null,"abstract":"<p><p>In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question whether gifted children surpass their typically developing peers not only in the intellectual abilities, but also in their level of metacognitive skills, has not been convincingly answered so far. We sought to examine the indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics supposed to bear on metacognitive skills, namely the overall trial duration, mean fixation duration, number of regressions and normalized gaze transition entropy, were analyzed. No significant differences between gifted and average children were found in the normalized gaze transition entropy, in mean fixation duration, nor - after controlling for the trial duration - in number of regressions. Both groups of children differed in the time devoted to solving the task. Both groups significantly differed in the association between time devoted to the task and the participants' subjective confidence rating, where only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8559419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-14 | DOI: 10.16910/jemr.14.2.6
Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco, Balamurali Vasudevan
This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. Thirty-one visually normal subjects were enrolled. Two passages from the Visagraph standardized text were read on the iPad and in print. Eye movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer on the iPad, at 270 ms (40), than with the printed text, at 260 ms (40) (p=0.04). Subjects' mean reading rates were significantly lower on the iPad, at 294 words per minute (wpm), than with the printed text, at 318 wpm (p=0.03). Mean (SD) overall reading duration was significantly longer on the iPad, at 31 s (9.3), than with the printed text, at 28 s (8.0) (p=0.02). Overall reading performance is lower with an iPad than with printed text in visually normal individuals. These findings might be more consequential for children and slower adult readers when they read on iPads.
{"title":"Reading Eye Movements Performance on iPad vs Print Using a Visagraph.","authors":"Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco, Balamurali Vasudevan","doi":"10.16910/jemr.14.2.6","DOIUrl":"https://doi.org/10.16910/jemr.14.2.6","url":null,"abstract":"This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. 31 visually-normal subjects were enrolled. Two of the passages were read from the Visagraph standardized text on iPad and Print. Eye movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer with the iPad at 270 ms (40) compared to the printed text (p=0.04) at 260 ms (40). Subjects’ mean reading rates were significantly lower on the iPad at 294 words per minute (wpm) than the printed text at 318 wpm (p=0.03). The mean (SD) overall reading duration was significantly (p=0.02) slower on the iPad that took 31 s (9.3) than the printed text at 28 s (8.0). Overall reading performance is lower with an iPad than printed text in normal individuals. These findings might be more consequential in children and adult slower readers when they read using iPads.","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557948/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-31 | eCollection Date: 2021-01-01 | DOI: 10.16910/jemr.14.4.2
L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun, Pradipta Biswas
The gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eyeball model and obtain a gaze point estimate, while appearance-based methods attempt to directly map captured eye images to the gaze point without any handcrafted features. Recently, the availability of large datasets and novel deep learning techniques has allowed appearance-based methods to achieve higher accuracy than model-based approaches. However, many appearance-based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well the current state-of-the-art approaches perform in real time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed at improving subject-independent gaze estimation accuracy, which achieved state-of-the-art mean angle errors of 4.3 and 8.4 degrees on the MPIIGaze and RT-Gene datasets, respectively. We evaluated the proposed system as a real-time gaze-controlled interface for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. In a user study with 16 participants, our proposed system reduced selection time and the number of missed selections by a statistically significant margin compared with the other two systems.
{"title":"I2DNet - Design and Real-Time Evaluation of Appearance-based gaze estimation system.","authors":"L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun, Pradipta Biswas","doi":"10.16910/jemr.14.4.2","DOIUrl":"https://doi.org/10.16910/jemr.14.4.2","url":null,"abstract":"<p><p>Gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eye-ball model to obtain gaze point estimate while appearance-based methods attempt to directly map captured eye images to gaze point without any handcrafted features. Recently, availability of large datasets and novel deep learning techniques made appearance-based methods achieve superior accuracy than model-based approaches. However, many appearance- based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well the current state-of-the-art approaches perform in real-time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed to improve subject- independent gaze estimation accuracy that achieved a state-of-the-art 4.3 and 8.4 degree mean angle error on the MPIIGaze and RT-Gene datasets respectively. We have evaluated the proposed system as a gaze-controlled interface in real-time for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. We have conducted a user study with 16 participants, and our proposed system reduces selection time and the number of missed selections statistically significantly compared to other two systems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8561667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-27 | eCollection Date: 2020-01-01 | DOI: 10.16910/jemr.13.2.15
Arthur Crucq
Linear perspective has long been used to create the illusion of three-dimensional space on the picture plane. One of its central axioms comes from Euclidean geometry and holds that all parallel lines converge in a single vanishing point. Although linear perspective provided the painter with a means to organize the painting, the question is whether the gaze of the beholder is also affected by the underlying structure of linear perspective: for instance, in such a way that the orthogonals leading to the vanishing point also automatically guide the beholder's gaze. This was researched during a pilot study by means of an eye-tracking experiment at the Lab for Cognitive Research in Art History (CReA) of the University of Vienna. It appears that in some compositions the vanishing point attracts the participant's gaze. This effect is more significant when the vanishing point coincides with the central vertical axis of the painting, but is even stronger when the vanishing point also coincides with a major visual feature such as an object or figure. The latter calls into question what exactly attracts the gaze of the viewer, i.e., what comes first: the geometrical construct of the vanishing point or the visual feature?
{"title":"Viewing Patterns and Perspectival Paintings: An Eye-Tracking Study on the Effect of the Vanishing Point.","authors":"Arthur Crucq","doi":"10.16910/jemr.13.2.15","DOIUrl":"https://doi.org/10.16910/jemr.13.2.15","url":null,"abstract":"<p><p>Linear perspective has long been used to create the illusion of three-dimensional space on the picture plane. One of its central axioms comes from Euclidean geometry and holds that all parallel lines converge in a single vanishing point. Although linear perspective provided the painter with a means to organize the painting, the question is whether the gaze of the beholder is also affected by the underlying structure of linear perspective: for instance, in such a way that the orthogonals leading to the vanishing point also automatically guides the beholder's gaze. This was researched during a pilot study by means of an eye-tracking experiment at the Lab for Cognitive Research in Art History (CReA) of the University of Vienna. It appears that in some compositions the vanishing point attracts the view of the participant. This effect is more significant when the vanishing point coincides with the central vertical axis of the painting, but is even stronger when the vanishing point also coincides with a major visual feature such as an object or figure. The latter calls into question what exactly attracts the gaze of the viewer, i.e., what comes first: the geometrical construct of the vanishing point or the visual feature?</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39538947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}