Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian.
Ehab W Hermena
Journal of Eye Movement Research, 14(1), 2021-05-31. doi:10.16910/jemr.14.1.6

Persian is an Indo-Iranian language written in a derivative of the Arabic cursive script, in which most letters within a word connect to adjacent letters via ligatures. Two experiments are reported that exploited these properties of Persian script to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within words. Experiment 1 revealed that decreasing interword spacing while extending interletter ligatures by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. Together, the experiments show that providing readers with inaccurate word-boundary information slows reading: in Experiment 1 by reducing the interword space following letters that do not connect to the next letter, and in Experiment 2 by replacing the interword space with a ligature that connected the words. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rate in the experimental conditions.

Gaze aversion in conversational settings: An investigation based on mock job interview.
Cengiz Acarturk, Bipin Indurkya, Piotr Nawrocki, Bartlomiej Sniezynski, Mateusz Jarosz, Kerem Alp Usal
Journal of Eye Movement Research, 14(1), 2021-05-19. doi:10.16910/jemr.14.1.1
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in a mock job interview setting. To address the methodological challenges of assessing gaze-to-face contact, the experiment was run twice, each time with a different set of interviewees: in one run the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both runs were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts than the interviewee, and that the interviewer's gaze aversions were mostly diagonal whereas the interviewee's were sideways (left or right). We discuss the relevance of this research for Human-Robot Interaction and outline some future research problems.
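The modeling step in this study, representing gaze sequences as Discrete-Time Markov Chains, amounts to estimating a transition matrix from a sequence of discrete gaze states. Below is a minimal sketch of that estimation; the state labels and the toy sequence are illustrative assumptions, not the paper's actual coding scheme.

```python
import numpy as np

def estimate_transition_matrix(sequence, states):
    """Estimate a DTMC transition matrix from a gaze-state sequence
    by counting state-to-state transitions and row-normalizing."""
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for current, nxt in zip(sequence, sequence[1:]):
        counts[index[current], index[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows for states that never occur (or are never left) stay all-zero
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical gaze states: 'C' = gaze contact, 'L'/'R' = sideways
# aversion, 'D' = diagonal aversion (labels are assumptions)
sequence = list("CCDLCCRDCCLC")
P = estimate_transition_matrix(sequence, states=["C", "L", "R", "D"])
print(P)  # entry (i, j): P(next state = j | current state = i)
```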

Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications.
Felix S Wang, Julian Wolf, Mazda Farshad, Mirko Meboldt, Quentin Lohmeyer
Journal of Eye Movement Research, 14(1), 2021-05-19. doi:10.16910/jemr.14.1.5
Eye tracking (ET) has been shown to reveal the wearer's cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. In an evaluation of two AOIs in a real surgical procedure, incorporating the near-peripheral field of vision increased the proportion of interpretable fixation data considerably, from 23.8% to 78.3% for the screw AOI and from 4.5% to 67.2% for the screwdriver AOI. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
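The core of the OGD metric is a per-frame distance computation: given a detected AOI and the gaze point, take the minimal 2D Euclidean pixel distance between them. The sketch below assumes the AOI arrives as a binary segmentation mask (the paper's machine-learning detector is abstracted away); an on-object gaze yields a distance of zero, and stringing the per-frame values together gives the continuous gaze-based time series.

```python
import numpy as np

def object_gaze_distance(aoi_mask, gaze_xy):
    """Minimal Euclidean pixel distance from the gaze point to any
    pixel of the AOI mask (0 when the gaze lands on the object)."""
    ys, xs = np.nonzero(aoi_mask)  # pixel coordinates of the detected AOI
    if xs.size == 0:
        return np.nan              # AOI not detected in this frame
    return np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]).min()

# One OGD value per video frame yields the gaze-based time series;
# mask and gaze point here are toy values, not study data.
mask = np.zeros((1080, 1920), dtype=bool)
mask[500:520, 900:940] = True      # hypothetical AOI (e.g., the screwdriver)
print(object_gaze_distance(mask, (960, 540)))
```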

A low-cost, high-performance video-based binocular eye tracker for psychophysical research.
Daria Ivanchenko, Katharina Rifai, Ziad M Hafed, Frank Schaeffel
Journal of Eye Movement Research, 14(3), 2021-05-05. doi:10.16910/jemr.14.3.3
We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker offers a number of useful features: (1) automated simultaneous calibration of both eyes while subjects sequentially fixate four fixation points on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. The last feature is critical because pupil diameter changes are known to be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
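The abstract mentions a fast, simple linear calibration from four fixation points but does not spell out the mapping. One common realization, sketched here as an assumption rather than the authors' actual Visual C++ implementation, is a least-squares affine map from pupil-center coordinates to screen coordinates.

```python
import numpy as np

def fit_affine_calibration(pupil_xy, screen_xy):
    """Fit screen = [px, py, 1] @ A by least squares, from pupil centers
    recorded while the subject fixates known on-screen targets."""
    X = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])  # (n, 3)
    A, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)        # (3, 2)
    return A

def apply_calibration(A, pupil_xy):
    return np.column_stack([pupil_xy, np.ones(len(pupil_xy))]) @ A

# Four calibration targets, as in the described procedure
# (all coordinates below are illustrative, not real recordings)
pupil = np.array([[310.0, 240.0], [350, 238], [312, 270], [352, 268]])
screen = np.array([[480.0, 270.0], [1440, 270], [480, 810], [1440, 810]])
A = fit_affine_calibration(pupil, screen)
print(apply_calibration(A, pupil).round(1))  # should land near the targets
```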

The interplay between task difficulty and microsaccade rate: Evidence for the critical role of visual load.
Andrea Schneider, Andreas Sonderegger, Eva Krueger, Quentin Meteier, Patrick Luethold, Alain Chavaillaz
Journal of Eye Movement Research, 13(5), 2021-04-28. doi:10.16910/jemr.13.5.6
In previous research, microsaccades have been suggested as psychophysiological indicators of task load. It is still under debate, however, how different types of task demands influence microsaccade rate. This study examines the relation between visual load, mental load, and microsaccade rate. Fourteen participants carried out a continuous performance task (n-back) in which visual load (letters vs. abstract figures) and mental load (1-back to 4-back) were manipulated as within-subjects variables. Eye-tracking data, performance data, and subjective workload were recorded. Data analysis revealed an increased microsaccade rate for stimuli of high visual demand (i.e., abstract figures), while mental demand (n-back level) did not modulate microsaccade rate. In conclusion, the present results suggest that microsaccade rate reflects the visual load of a task rather than its mental load.
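The abstract does not state which detection algorithm produced the microsaccade rates. A widely used choice for such data is the Engbert-Kliegl velocity-threshold method; the sketch below implements that style of detection as an illustrative assumption, not as the authors' pipeline.

```python
import numpy as np

def microsaccade_rate(x, y, fs, lam=6.0, min_samples=3):
    """Engbert-Kliegl-style detection: the velocity threshold is lam times
    a median-based SD estimate per axis; returns microsaccades per second.
    Parameter values (lam, min_samples) are conventional defaults."""
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    outside = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1
    # Count runs of at least min_samples consecutive supra-threshold samples
    n, run = 0, 0
    for flag in outside:
        run = run + 1 if flag else 0
        if run == min_samples:
            n += 1
    return n / (len(x) / fs)

# Synthetic 10 s gaze trace at 500 Hz, just to exercise the function
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.01, 5000))
y = np.cumsum(rng.normal(0, 0.01, 5000))
print(microsaccade_rate(x, y, fs=500))
```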

Beyond the tracked line of sight - Evaluation of the peripheral usable field of view in a simulator setting.
Jan Bickerdt, Hannes Wendland, David Geisler, Jan Sonnenberg, Enkelejda Kasneci
Journal of Eye Movement Research, 12(3), 2021-04-26. doi:10.16910/jemr.12.3.9
Combining advanced gaze tracking systems with the latest vehicle environment sensors opens up new fields of application for driver assistance. Gaze tracking enables researchers to determine the location of a fixation and, taking the visual saliency of the scene into account, to predict visual perception of objects. The perceptual limits for stimulus identification reported in the literature have mostly been determined under laboratory conditions, using isolated stimuli with a fixed gaze point on a single screen covering only a limited portion of the field of view, and they are usually reported as hard limits. Such limits are therefore not applicable to settings with a wide field of view, natural viewing behavior, and multiple simultaneous stimuli. Because handling sudden, potentially critical driving maneuvers relies heavily on peripheral vision, the peripheral limits of feature perception need to be included in the determined perceptual limits. To analyze human visual perception of different, simultaneously occurring object changes (shape, color, movement), we conducted a study with 50 participants in a driving simulator, and we propose a novel way to determine perceptual limits that is more applicable to driving scenarios.
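Relating an object change to peripheral perceptual limits requires its retinal eccentricity, i.e., the visual angle between the current gaze point and the object. For a screen- or simulator-based setup this is a small geometric computation, sketched below with illustrative geometry (the study's actual simulator parameters are not given here).

```python
import numpy as np

def eccentricity_deg(gaze_px, object_px, px_per_cm, viewing_cm):
    """Visual angle (degrees) between gaze point and object on a flat
    screen, assuming the line of sight meets the screen perpendicularly
    at the gaze point (a simplification for this sketch)."""
    d_px = np.hypot(object_px[0] - gaze_px[0], object_px[1] - gaze_px[1])
    d_cm = d_px / px_per_cm
    return np.degrees(np.arctan2(d_cm, viewing_cm))

# Illustrative: object 600 px from gaze on a 38 px/cm screen viewed at 80 cm
print(round(eccentricity_deg((960, 540), (1560, 540), 38.0, 80.0), 1))
```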

Interaction between image and text during the process of biblical art reception.
Gregor Hardiess, Caecilie Weissert
Journal of Eye Movement Research, 13(2), 2021-03-12. doi:10.16910/jemr.13.2.14

In our exploratory study, we ask how naive observers without a distinct religious background approach biblical art that combines image and text. For this purpose, we chose the book 'New biblical figures of the Old and New Testament', published in 1569, as the source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the reception of such biblical art, we selected four relevant images from the book and measured participants' eye movements in order to characterize and quantify their scanning behavior in terms of (i) looking at text (text usage), (ii) text vs. image interaction measures (semantic or contextual relevance of text), and (iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, the semantics of the texts are used to guide eye movements later through the image, supporting the formation of the narrative.

Developing Expert Gaze Pattern in Laparoscopic Surgery Requires More than Behavioral Training.
Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D Champion, Morgan L Cox, L Gregory Appelbaum
Journal of Eye Movement Research, 14(2), 2021-03-10. doi:10.16910/jemr.14.2.2
Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed, and five novices were trained and assessed, in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence so that dwell durations could be recorded for pre-defined areas of interest (AOIs). Trained novices reached more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, yielding behavioral performance equivalent to that of the surgeons. Despite this equivalence, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen's ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze patterns remain less efficient than those of surgeons, motivating surgical training programs to involve eye tracking technology in their design and evaluation.
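The study's central gaze measure is dwell duration per pre-defined AOI. Once fixations have been detected and assigned to AOIs upstream, the aggregation itself is simple; the sketch below shows it with hypothetical AOI labels for one peg-transfer trial (labels and durations are assumptions, not study data).

```python
from collections import defaultdict

def dwell_durations(fixations):
    """Sum fixation durations (ms) per AOI label.
    `fixations` is a sequence of (aoi_label, duration_ms) tuples."""
    totals = defaultdict(float)
    for aoi, duration in fixations:
        totals[aoi] += duration
    return dict(totals)

# Illustrative fixation stream for one trial: experts would show shorter
# dwell on the current target and longer dwell on future steps
trial = [("current_target", 240), ("next_step", 180),
         ("current_target", 210), ("instrument", 120), ("next_step", 260)]
print(dwell_durations(trial))
```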

Identifying solution strategies in a mental-rotation test with gender-stereotyped objects.
Mirko Saunders, Claudia M Quaiser-Pohl
Journal of Eye Movement Research, 13(6), 2021-03-10. doi:10.16910/jemr.13.6.5

Many studies deal with solution strategies in mental-rotation tests. The approaches range from global analysis and attention to object parts, through holistic and piecemeal strategies, to a combined strategy. Other studies speak not of strategies but of holistic or piecemeal processes, or even of holistic or piecemeal rotation. The methodological approach used here is to identify mental-rotation strategies via gaze patterns derived from eye-tracking data during chronometric mental-rotation tasks with gender-stereotyped objects. The mental-rotation test consists of three male-stereotyped objects (locomotive, hammer, wrench) and three female-stereotyped objects (pram, hand mirror, brush), rotated at eight different angles. The sample consisted of 16 women and 10 men (age: M = 21.58, SD = 4.21). A qualitative analysis of two individual objects (wrench and brush) showed four different gaze patterns. These gaze patterns appeared with different frequencies for the two objects and correlated differently with performance and response time. The results point to either an object-oriented or an egocentric mental-rotation strategy behind the gaze patterns. More generally, a new methodological approach has been developed to identify mental-rotation strategies bottom-up, which can also be used with other stimulus types.
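Identifying gaze patterns bottom-up starts from simple scanpath descriptors. As one illustrative possibility (not the authors' actual feature set), the sketch below computes, for a two-object chronometric trial, the share of fixation time on each stimulus and the number of gaze switches between them, the kind of features that separate comparison-heavy from single-object inspection patterns.

```python
def scanpath_features(fixations, midline_x):
    """Simple bottom-up features for a mental-rotation scanpath.
    `fixations` is a list of (x, duration_ms) tuples; the two stimuli
    are assumed to sit left and right of `midline_x` (an assumption
    about the display layout, not taken from the paper)."""
    total = sum(d for _, d in fixations)
    left = sum(d for x, d in fixations if x < midline_x)
    sides = ["L" if x < midline_x else "R" for x, _ in fixations]
    switches = sum(a != b for a, b in zip(sides, sides[1:]))
    return {"left_share": left / total, "switches": switches}

# Toy trial: alternating inspection of the left and right objects
trial = [(420, 300), (430, 250), (1210, 400), (415, 280), (1220, 350)]
print(scanpath_features(trial, midline_x=800))
```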

Two Electrical Engineers, One Problem, and Evolution Produced the Same Solution: A Historical Note.
Louis F Dell'Osso
Journal of Eye Movement Research, 14(1), 2021-02-15. doi:10.16910/jemr.14.1.2

This note adds historical context to the problem of improving the speed of the step response of a low-order plant in two different types of control systems: a chemical mixing system and the human saccadic system. Two electrical engineers studied this problem, one to understand and model how nature and evolution solved it, the other to design a control system solving it in a man-made commercial system. David A. Robinson discovered that fast and accurate saccades are produced by a pulse-step of neural innervation applied to the extraocular plant. Leonidas M. Mantgiaris invented a method to achieve rapid and accurate chemical mixing by applying a large stimulus for a short period of time and then replacing it with the desired steady-state value (i.e., a "pulse-step" input). Thus, two humans used their brains to (1) determine how the human brain produces saccades and (2) invent a control-system method for fast and accurate chemical mixing. That the second person came up with the same method by which his own brain was making saccades may shed light on the question of whether the human brain can fully understand itself.
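The shared trick is easy to see in simulation: a first-order (low-order) plant approaches a plain step input sluggishly, with its time constant, but a brief overdriving pulse followed by the steady-state step drives the output to its target almost immediately. The sketch below illustrates the principle; the time constant and pulse sizing are illustrative, not Robinson's oculomotor-plant or Mantgiaris's process parameters.

```python
import numpy as np

def first_order_response(u, tau, dt):
    """Euler simulation of a first-order plant: tau * dy/dt = u - y."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt / tau * (u[k - 1] - y[k - 1])
    return y

dt, tau = 0.001, 0.2                        # seconds; tau is illustrative
t = np.arange(0.0, 1.0, dt)
step = np.ones_like(t)                      # plain step to target = 1
# Pulse-step: 50 ms pulse sized so the output lands near the target
# when the pulse ends, then the ordinary step holds it there
pulse_step = np.where(t < 0.05, 4.5, 1.0)

y_step = first_order_response(step, tau, dt)
y_pulse = first_order_response(pulse_step, tau, dt)
# Time to reach 95% of the target: ~0.6 s for the step, ~0.05 s for the pulse-step
print(t[np.argmax(y_step >= 0.95)], t[np.argmax(y_pulse >= 0.95)])
```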