Florian Hauser, Lisa Grabinger, Timur Ezer, J. Mottok, Hans Gruber
"Analyzing and Interpreting Eye Movements in C++: Using Holistic Models of Image Perception." Eye Tracking Research & Application, 72:1-72:7, June 4, 2024. doi:10.1145/3649902.3655093
Kristina Cergol, M. Palmović
"The role of stress in silent reading." Eye Tracking Research & Application, 83:1-83:5, June 4, 2024. doi:10.1145/3649902.3656492
A. Chung, F. Deligianni, Xiao-Peng Hu, Guang-Zhong Yang
This paper presents a new technique for extracting visual saliency from experimental eye tracking data. An eye-tracking system is employed to determine which features a group of human observers considered salient when viewing a set of video images. With this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature-space representation. Features related to visual attention are determined through a normalisation process based on the relative abundance of visual features in the background image compared with those dwelled on along eye-tracking scan paths. These features are then back-projected to the image domain to determine spatial areas of interest for unseen video images. The strengths and weaknesses of the method are demonstrated with feature correspondence for 2D-to-3D image registration of endoscopy videos with computed tomography data. The biologically derived saliency map provides an image similarity measure that forms the heart of the 2D/3D registration method. It is shown that by processing only selective regions of interest, as determined by the saliency map, rendering overhead can be greatly reduced. Significant improvements in pose estimation efficiency are achieved without apparent reduction in registration accuracy compared with a non-saliency-based similarity measure.
"Visual feature extraction via eye tracking for saliency driven 2D/3D registration." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968371
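The normalisation step the abstract describes — weighting features by how often they are dwelled on relative to their abundance in the background, then back-projecting those weights into the image — can be sketched as follows. This is a hypothetical reconstruction using grey-level intensity as the only feature; the paper's actual feature space is richer, and the function and parameter names are illustrative.

```python
import numpy as np

def saliency_backprojection(image, fixations, bins=32):
    """Feature-ratio saliency sketch.

    image     : 2-D array of grey-level intensities in [0, 1)
    fixations : list of (row, col) pixels dwelled on by scan paths
    Returns a saliency map with the same shape as `image`.
    """
    # Histogram of the feature (here: intensity) over the whole image ...
    background, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    # ... and over only the fixated pixels.
    fixated_vals = np.array([image[r, c] for r, c in fixations])
    attended, _ = np.histogram(fixated_vals, bins=bins, range=(0.0, 1.0))
    # Normalise by relative abundance: feature values over-represented
    # under fixations relative to the background get high weight.
    weights = attended / np.maximum(background, 1)
    m = weights.max()
    if m > 0:
        weights = weights / m
    # Back-project the per-feature weights into the image domain.
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    return weights[idx]
```

Pixels whose feature value was rarely present in the background but frequently fixated end up bright in the returned map, which is what lets the registration step restrict itself to those regions.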
Cognitive models and empirical studies of problem solving in visuo-spatial and causal domains suggest that problem solving tasks in such domains invoke cognitive processes involving mental animation and imagery. If these internal processes are externally manifested in the form of eye movements, such tasks present situations in which the trajectory of a user's visual attention can provide clues regarding his or her information needs to an Attentive User Interface [Vertegaal 2002]. In this paper, we briefly review research related to problem solving that involves mental imagery, and describe an experiment that looked for evidence and effects of an imagery strategy in problem solving. We eye-tracked 90 subjects solving two causal reasoning problems, one in which a diagram of the problem appeared on the stimulus display, and a second related problem that was posed on a blank display. Results indicated that 42% of the subjects employed mental imagery and visually scanned the display in a correspondingly systematic fashion. This suggests that information displays that respond to a user's visual attention trajectory, a kind of Attentive User Interface, are more likely to benefit this class of users.
Daesub Yoon, N. Hari Narayanan
"Mental imagery in problem solving: an eye tracking study." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968382
Eye typing provides a means of communication, especially for people with severe disabilities. Recent research indicates that the type of feedback impacts typing speed, error rate, and the user's need to switch their gaze between the on-screen keyboard and the typed text field. The current study focuses on issues of feedback when a short dwell time (450 ms, versus 900 ms in a previous study) is used. Results show that findings obtained with longer dwell times only partly apply to shorter dwell times. For example, with a short dwell time, spoken feedback results in slower text entry and double-entry errors. A short dwell time requires sharp and clear feedback that supports the typing rhythm.
P. Majaranta, A. Aula, Kari-Jouko Räihä
"Effects of feedback on eye typing with a short dwell time." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968390
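The dwell-time selection mechanism that eye-typing studies like this one build on — a key is "typed" once gaze rests on it continuously for the dwell threshold — can be sketched as below. This is a minimal illustrative sketch, not the study's software; the sample format and names are assumptions.

```python
def dwell_select(samples, dwell_ms=450):
    """Yield a key each time gaze dwells on it for `dwell_ms`.

    samples : iterable of (timestamp_ms, key) gaze samples, where `key`
              is the on-screen key under the gaze (None if no key).
    """
    current, start = None, None
    for t, key in samples:
        if key != current:
            # Gaze moved to a different key (or off the keyboard):
            # restart the dwell timer.
            current, start = key, t
        elif key is not None and t - start >= dwell_ms:
            yield key  # dwell threshold reached: select ("type") the key
            # Require a fresh dwell before the same key can repeat.
            current, start = None, None
```

Shortening `dwell_ms` from 900 to 450 leaves less slack for feedback: confirmation that arrives late relative to the dwell window disrupts the typing rhythm, which is the regime the study examines.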
Alexander W. Skaburskis, Roel Vertegaal, Jeffrey S. Shell
As ubiquitous computing becomes more prevalent, greater consideration will have to be given to how devices interrupt us and vie for our attention. This paper describes Auramirror, an interactive art piece that raises questions about how computers use our attention. By measuring attention and visualizing the results for the audience in real time, Auramirror brings the subject matter to the forefront of the audience's consideration. Finally, some ways of using the Auramirror system to help in the design of attention-sensitive devices are discussed.
"Auramirror: reflections on attention." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968385
This paper describes the influence of eye blinks on frequency analysis and power spectrum differences for task-evoked pupillography and eye movement, during an experiment consisting of ocular following tasks and oral calculation tasks with three levels of difficulty: control, 1×1-digit, and 1×2-digit oral calculation. A compensation model for temporal pupil size, based on an MLP (multilayer perceptron), was trained to detect blinks and to estimate pupil size using blink-free pupillary change and artificial blink patterns. PSD (power spectral density) measurements from the estimated pupillography during oral calculation tasks show significant differences: the PSD increased with task difficulty in the 0.1–0.5 Hz and 1.6–3.5 Hz bands, as did average pupil size. Eye movement during blinks was corrected manually to remove irregular eye movements such as saccades. The CSD (cross spectral density) was computed from horizontal and vertical eye-movement coordinates, and significant differences in CSDs among experimental conditions were found in the 0.6–1.5 Hz band. These differences suggest that task difficulty affects the relationship between horizontal and vertical eye-movement coordinates in the frequency domain.
M. Nakayama, Y. Shimizu
"Frequency analysis of task evoked pupillary response and eye-movement." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968381
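The two spectral quantities the paper compares across conditions — the PSD of the pupil signal and the CSD between horizontal and vertical gaze coordinates — can be sketched with plain FFT-based estimators. This is a generic illustration of the measures, not the authors' analysis pipeline; band edges follow the abstract, and all function names are assumptions.

```python
import numpy as np

def periodogram(x, fs):
    """One-sided power spectral density estimate of a real signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.fft.rfft(x)
    psd = (np.abs(spec) ** 2) / (fs * x.size)
    psd[1:-1] *= 2  # fold power from the negative frequencies
    return np.fft.rfftfreq(x.size, 1 / fs), psd

def cross_spectrum(x, y, fs):
    """Cross spectral density between two signals (e.g. gaze x and y)."""
    X = np.fft.rfft(np.asarray(x, float) - np.mean(x))
    Y = np.fft.rfft(np.asarray(y, float) - np.mean(y))
    return np.fft.rfftfreq(len(x), 1 / fs), np.conj(X) * Y / (fs * len(x))

def band_power(freqs, density, lo, hi):
    """Total power of a spectral density within the band [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.real(density[mask]).sum() * (freqs[1] - freqs[0])
```

With these, the paper's comparisons reduce to evaluating `band_power` at 0.1–0.5 Hz and 1.6–3.5 Hz for the pupil PSD, and at 0.6–1.5 Hz for the gaze CSD, per task condition.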
Calibration is one of the most tedious and often annoying aspects of many eye tracking systems. It normally consists of looking at several marks on a screen in order to collect enough data to adjust the parameters of a fitted model. Unfortunately, this step is unavoidable if an accurate tracking system is desired. Many efforts have been made to build more capable eye tracking systems, yet the search for an accurate mathematical model is perhaps one of the least researched areas. The lack of a parametric description of the gaze estimation problem makes it difficult to find the most suitable model, so generic expressions are employed in calibration and tracking sessions instead. A model based on parameters describing the elements involved in the tracking system would provide a stronger basis and greater robustness. The aim of this work is to build a mathematical model based entirely on realistic variables describing the elements taking part in an eye tracking system using the well-known bright-pupil technique, i.e. the user, camera, illumination, and screen. The model is considered defined when the expression relating the point the user is looking at to the features extracted from the image (glint position and pupil center) is found. The desired model should be simple, realistic, accurate, and easy to calibrate.
A. Villanueva, R. Cabeza, Sonia Porta
"Eye tracking system model with easy calibration." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968372
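For contrast, the "generic expressions" the abstract argues against — typically a polynomial regression from pupil-glint vectors to screen coordinates, fitted over a grid of calibration marks — can be sketched as below. This is an illustrative sketch of that baseline approach, not the authors' parametric model; the polynomial terms and function names are assumptions.

```python
import numpy as np

def fit_gaze_mapping(pg_vectors, screen_points):
    """Fit a second-order polynomial map from pupil-glint vectors
    to screen coordinates via least squares.

    pg_vectors    : (N, 2) pupil-centre minus glint vectors, one per mark
    screen_points : (N, 2) known on-screen positions of the marks
    Returns a (6, 2) coefficient matrix (columns: screen x and y).
    """
    px, py = pg_vectors[:, 0], pg_vectors[:, 1]
    # Design matrix of polynomial terms: 1, x, y, xy, x^2, y^2
    A = np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def predict_gaze(coeffs, pg):
    """Map one pupil-glint vector to an estimated screen point."""
    px, py = pg
    terms = np.array([1.0, px, py, px * py, px**2, py**2])
    return terms @ coeffs
```

Each new user must fixate enough marks (here at least six) to determine the coefficients, which is exactly the tedious per-session calibration the paper's physically parameterised model aims to reduce.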
Some historical issues regarding the use of eye movements to study cognitive processes will initially be discussed. The development of eye contingent display change experiments will be reviewed and examples will be presented regarding how the development of the technique provided answers to interesting questions. For the most part, examples will be taken from the psychology of reading, but other tasks will also be discussed. More recently, sophisticated models of eye movement control in the context of reading have been developed, and these models will be discussed. Some thoughts on future directions of eye movement research will also be presented.
K. Rayner
"Eye movements as reflections of perceptual and cognitive processes (abstract only)." Eye Tracking Research & Application, March 22, 2004. doi:10.1145/968363.968365