Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev
Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal; for an underlying multimodal distribution, however, the mean is less interpretable. We present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed (of the entire dataset, 1.7% were unimodal and normal). Multimodality occurs even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measuring accuracy in the face of multimodality, and we address the role of fixation drift in partially explaining it.
{"title":"Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal.","authors":"Lee Friedman, Dillon Lohr, Timothy Hanson, Oleg V Komogortsev","doi":"10.16910/jemr.14.3.2","DOIUrl":"10.16910/jemr.14.3.2","url":null,"abstract":"<p><p>Typically, the position error of an eye-tracking device is measured as the distance of the eye-position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We will present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality is true even if there is only a single, continuous tracking fixation segment per trial. We present several approaches to measure accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189800/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39010217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The control of technological systems by human operators has been the object of study for many decades. The increasing complexity of the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role; they are captured in the contributions to this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and increasing interdisciplinarity among researchers. Whereas most studies investigate bottom-up processes of covert attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving, where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Combining advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits that is applicable to realistic driving scenarios. Eye-Computer Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimal target size and gaze-triggering dwell time in ECI. Their results have exemplary application value for future interface design. Aircraft training and pilot selection are commonly performed on simulators. This makes it possible to study human capabilities and their limitations in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w[…] Their common goal is to improve the potential and the safety of technology in the digital age in accordance with human capabilities and limitations.
{"title":"Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.","authors":"Rudolf Groner, Enkelejda Kasneci","doi":"10.16910/jemr.12.3.0","DOIUrl":"10.16910/jemr.12.3.0","url":null,"abstract":"<p><p>The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role, which are captured in the contributions listed in this thematic issue by eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojsteršek & Topolšek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and an increasing interdisciplinarity of researchers. In addition to most studies investigating bottom-up processes of covered attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Through a combination of advanced gaze tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits which are applicable to realistic driving scenarios. Eye-Computer-Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI. Their results have an exemplary application value for future interface design. Aircraft training and pilot selection is commonly performed on simulators. This makes it possible to study human capabilities and their limitation in interaction with the simulated technological system. 
Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (11) propose a network approach w","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"12 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8182438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Persian is an Indo-Iranian language written in a script derived from Arabic cursive, in which most letters within words connect to adjacent letters with ligatures. Two experiments are reported in which the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing readers with inaccurate word-boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and by replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.
{"title":"Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian.","authors":"Ehab W Hermena","doi":"10.16910/jemr.14.1.6","DOIUrl":"https://doi.org/10.16910/jemr.14.1.6","url":null,"abstract":"<p><p>Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189716/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we conducted the experiment twice, each time with a different set of interviewees: in one, the interviewer's gaze was tracked with an eye tracker, and in the other, the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts than the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction and outline some future research problems.
{"title":"Gaze aversion in conversational settings: An investigation based on mock job interview.","authors":"Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal","doi":"10.16910/jemr.14.1.1","DOIUrl":"https://doi.org/10.16910/jemr.14.1.1","url":null,"abstract":"<p><p>We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and discuss some future research problems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8188832/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer
Eye tracking (ET) has been shown to reveal the wearer's cognitive processes via measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show a considerable increase in interpretable fixation data, from 23.8% to 78.3% for the AOI 'screw' and from 4.5% to 67.2% for the AOI 'screwdriver', when the near-peripheral field of vision is incorporated. Additionally, the evaluation of a multi-OGD time series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
{"title":"Object-Gaze Distance: Quantifying Near- Peripheral Gaze Behavior in Real-World Applications.","authors":"Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer","doi":"10.16910/jemr.14.1.5","DOIUrl":"https://doi.org/10.16910/jemr.14.1.5","url":null,"abstract":"<p><p>Eye tracking (ET) has shown to reveal the wearer's cognitive processes using the measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearers' use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object- Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time-series. Based on an evaluation of two AOIs in a real surgical procedure, the results show that a considerable increase of interpretable fixation data from 23.8 % to 78.3 % of AOI screw and from 4.5 % to 67.2 % of AOI screwdriver was achieved, when incorporating the near-peripheral field of vision. Additionally, the evaluation of a multi-OGD time series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8189527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel
We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration simultaneously in both eyes while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because pupil diameter changes are known to be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
{"title":"A low-cost, high-performance video-based binocular eye tracker for psychophysical research.","authors":"Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel","doi":"10.16910/jemr.14.3.3","DOIUrl":"https://doi.org/10.16910/jemr.14.3.3","url":null,"abstract":"<p><p>We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration simultaneously in both eyes while subjects fixate four fixation points sequentially on a computer screen, (2) automated realtime continuous analysis of measurement noise, (3) automated blink detection, (4) and realtime analysis of pupil centration artifacts. This last feature is critical because it is known that pupil diameter changes can be erroneously registered by pupil-based trackers as a change in eye position. We evaluated the performance of our system against that of a wellestablished commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8190563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39010218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz
In previous research, microsaccades have been suggested as psychophysiological indicators of task load. It is still under debate, however, how different types of task demand influence microsaccade rate. This study examines the relation between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back) in which visual load (letters vs. abstract figures) and mental load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), while mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.
{"title":"The interplay between task difficulty and microsaccade rate: Evidence for the critical role of visual load.","authors":"Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz","doi":"10.16910/jemr.13.5.6","DOIUrl":"10.16910/jemr.13.5.6","url":null,"abstract":"<p><p>In previous research, microsaccades have been suggested as psychophysiological indicators of task load. So far, it is still under debate how different types of task demands are influencing microsaccade rate. This piece of research examines the relation between visual load, mental load and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back), in which visual (letters vs. abstract figures) and mental task load (1-back to 4-back) were manipulated as within-subjects variables. Eye tracking data, performance data as well as subjective workload were recorded. Data analysis revealed an increased level of microsaccade rate for stimuli of high visual demand (i.e. abstract figures), while mental demand (n-back-level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects visual load of a task rather than its mental load.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"13 5","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8188521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jan Bickerdt, Hannes Wendland, David Geisler, Jan Sonnenberg, Enkelejda Kasneci
Combining advanced gaze tracking systems with the latest vehicle environment sensors opens up new fields of application for driver assistance. Gaze tracking enables researchers to determine the location of a fixation and, taking into account the visual saliency of the scene, to predict visual perception of objects. The perceptual limits for stimulus identification reported in the literature have mostly been determined under laboratory conditions, using isolated stimuli with a fixed gaze point on a single screen covering a limited field of view. These limits are usually reported as hard limits. They are therefore not applicable to settings with a wide field of view, natural viewing behavior, and multiple simultaneous stimuli. Because handling of sudden, potentially critical driving maneuvers relies heavily on peripheral vision, the peripheral limits of feature perception need to be included in the determined perceptual limits. To analyze human visual perception of different, simultaneously occurring object changes (shape, color, movement), we conducted a study with 50 participants in a driving simulator, and we propose a novel way to determine perceptual limits that is more applicable to driving scenarios.
{"title":"Beyond the tracked line of sight - Evaluation of the peripheral usable field of view in a simulator setting.","authors":"Jan Bickerdt, Hannes Wendland, David Geisler, Jan Sonnenberg, Enkelejda Kasneci","doi":"10.16910/jemr.12.3.9","DOIUrl":"https://doi.org/10.16910/jemr.12.3.9","url":null,"abstract":"<p><p>Combining advanced gaze tracking systems with the latest vehicle environment sensors opens up new fields of applications for driver assistance. Gaze tracking enables researchers to determine the location of a fixation, and under consideration of the visual saliency of the scene, to predict visual perception of objects. The perceptual limits, for stimulus identification, found in literature have mostly been determined in laboratory conditions using isolated stimuli, with a fixed gaze point, on a single screen with limited coverage of the field of view. The found limits are usually reported as hard limits. Such commonly used limits are therefore not applicable to settings with a wide field of view, natural viewing behavior and multi-stimuli. As handling of sudden, potentially critical driving maneuvers heavily relies on peripheral vision, the peripheral limits for feature perception need to be included in the determined perceptual limits. To analyze the human visual perception of different, simultaneously occurring, object changes (shape, color, movement) we conducted a study with 50 participants, in a driving simulator and we propose a novel way to determine perceptual limits, which is more applicable to driving scenarios.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"12 3","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8183303/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39089874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In our exploratory study, we ask how naive observers without a distinct religious background approach biblical art that combines image and text. For this purpose, we chose the book 'New biblical figures of the Old and New Testament', published in 1569, as the source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the reception of such biblical art, we selected four relevant images from the book and measured the eye movements of participants in order to characterize and quantify their scanning behavior in terms of i) looking at text (text usage), ii) text vs. image interaction measures (semantic or contextual relevance of text), and iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, the semantics of the texts are used later to guide eye movements through the image, supporting the formation of the narrative.
{"title":"Interaction between image and text during the process of biblical art reception.","authors":"Gregor Hardiess, Caecilie Weissert","doi":"10.16910/jemr.13.2.14","DOIUrl":"https://doi.org/10.16910/jemr.13.2.14","url":null,"abstract":"<p><p>In our exploratory study, we ask how naive observers, without a distinct religious background, approach biblical art that combines image and text. For this purpose, we choose the book 'New biblical figures of the Old and New Testament' published in 1569 as source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the process of such biblical art reception, we selected four relevant images from the book and measured the eye movements of participants in order to characterize and quantify their scanning behavior related to such stimuli in terms of i) looking at text (text usage), ii) text vs. image interaction measures (semantic or contextual relevance of text), and iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, semantics of texts are used to guide eye movements later through the image, supporting the formation of the narrative.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"13 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8019328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25585594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D Champion, Morgan L Cox, L Gregory Appelbaum
Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed and five novices were trained and assessed in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence to allow recordings of dwell durations based on pre-defined areas of interest (AOIs). Trained novices were shown to reach more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, leading to equivalent behavioral performance to that of surgeons. Despite this equivalence in behavioral performance, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen's ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze pattern is still less efficient than that of surgeons, motivating surgical training programs to involve eye tracking technology in their design and evaluation.
{"title":"Developing Expert Gaze Pattern in Laparoscopic Surgery Requires More than Behavioral Training.","authors":"Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D Champion, Morgan L Cox, L Gregory Appelbaum","doi":"10.16910/jemr.14.2.2","DOIUrl":"https://doi.org/10.16910/jemr.14.2.2","url":null,"abstract":"<p><p>Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed and five novices were trained and assessed in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence to allow recordings of dwell durations based on pre-defined areas of interest (AOIs). Trained novices were shown to reach more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, leading to equivalent behavioral performance to that of surgeons. Despite this equivalence in behavioral performance, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen's ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze pattern is still less efficient than that of surgeons, motivating surgical training programs to involve eye tracking technology in their design and evaluation.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"14 2","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8019143/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25568834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}