Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality
M. S. Arefin, J. Swan, R. C. Hoffing, Steven M. Thurman
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529632
Virtual Reality (VR) technology has advanced to include eye tracking, enabling novel research such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could capture perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near to far depth significantly decreased EVA and increased IPD, while the opposite pattern was observed when shifting from far to near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, supporting the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could serve as a new tool to adaptively render information based on depth and improve the VR user experience.
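A minimal sketch of how these two variables could be computed, assuming the headset's eye tracker exposes a 3D origin and a gaze direction vector per eye (the function names and the 0.5 m / 2 m example depths are illustrative, not taken from the paper):

    import numpy as np

    def eye_vergence_angle_deg(left_dir, right_dir):
        # Angle between the two gaze direction vectors; it shrinks
        # as the fixated point moves to farther depths.
        l = left_dir / np.linalg.norm(left_dir)
        r = right_dir / np.linalg.norm(right_dir)
        return np.degrees(np.arccos(np.clip(np.dot(l, r), -1.0, 1.0)))

    def interpupillary_distance(left_origin, right_origin):
        # Distance between the tracked pupil positions; it widens
        # slightly as the eyes diverge toward far depths.
        return np.linalg.norm(np.asarray(right_origin) - np.asarray(left_origin))

    eyes_l, eyes_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
    for depth_m in (0.5, 2.0):                      # near vs. far fixation
        target = np.array([0.0, 0.0, depth_m])
        print(depth_m, eye_vergence_angle_deg(target - eyes_l, target - eyes_r))

Run as-is, the near fixation yields a vergence angle of roughly 7.3 degrees and the far one roughly 1.8 degrees, consistent with the reported near-to-far decrease in EVA.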
{"title":"Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality","authors":"M. S. Arefin, J. Swan, R. C. Hoffing, Steven M. Thurman","doi":"10.1145/3517031.3529632","DOIUrl":"https://doi.org/10.1145/3517031.3529632","url":null,"abstract":"Virtual Reality (VR) technology has advanced to include eye-tracking, allowing novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near-to-far depth significantly decreased EVA and increased IPD, while the opposite pattern was observed while shifting from far-to-near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, and supported the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could be used as a new tool to adaptively render information based on depth and improve the VR user experience.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114783084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usability of the super-vowel for gaze-based text entry
J. Matulewski, M. Patera
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529231
We experimentally tested the idea of reducing the number of buttons in a gaze-based text entry system by replacing all vowels with a single diamond character, which we call the super-vowel. It is inspired by historical optimizations of written language, such as abjads. This reduces the number of items on the screen, simplifying text input and allowing the buttons to be made larger. However, the modification can also act as a distractor that increases the number of errors. An experiment with 29 participants showed that, for non-standard text-entry methods, the modification slightly increases text entry speed and reduces the number of errors. However, this does not apply to the standard keyboard, a direct transformation of physical computer keyboards with a QWERTY layout.
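For illustration, the core transformation is a one-line vowel substitution; a sketch, assuming the Latin vowels a, e, i, o, u (which letters the authors treated as vowels is our assumption):

    SUPER_VOWEL = "\u25C6"  # black diamond character
    VOWELS = set("aeiouAEIOU")

    def to_super_vowel(text: str) -> str:
        # Collapse every vowel onto the single diamond 'super-vowel'.
        return "".join(SUPER_VOWEL if ch in VOWELS else ch for ch in text)

    print(to_super_vowel("gaze based text entry"))  # g◆z◆ b◆s◆d t◆xt ◆ntry

As with historical abjads, the reader reconstructs the intended word from its consonant skeleton and context.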
{"title":"Usability of the super-vowel for gaze-based text entry","authors":"J. Matulewski, M. Patera","doi":"10.1145/3517031.3529231","DOIUrl":"https://doi.org/10.1145/3517031.3529231","url":null,"abstract":"We tested experimentally the idea of reducing the number of buttons in the gaze-based text entry system by replacing all vowels with a single diamond character, which we call super-vowel. It is inspired by historical optimizations of the written language, like Abjar. This way, the number of items on the screen was reduced, simplifying text input and allowing to make the buttons larger. However, the modification can also be a distractor that increases the number of errors. As a result of an experiment on 29 people, it turned out that in the case of non-standard methods of entering text, the modification slightly increases the speed of entering the text and reduces the number of errors. However, this does not apply to the standard keyboard, a direct transformation of physical computer keyboards with a Qwerty button layout.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126614821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices
Samantha Aziz, D. Lohr, Oleg V. Komogortsev
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3532522
Comparing the performance of new eye-tracking devices against an established benchmark is vital for identifying differences in the way eye movements are reported by each device. This paper introduces a new paired data set comprising eye movement recordings captured simultaneously with both the EyeLink 1000—considered the “gold standard” in eye-tracking research studies—and the recently released AdHawk MindLink eye tracker. Our work presents a methodology for simultaneous data collection and a comparison of the resulting eye-tracking signal quality achieved by each device.
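Comparing the two recordings requires putting them on a common timeline first. A sketch of one plausible processing step, assuming both devices' timestamps have already been mapped to a shared clock (the synchronization methodology itself is the paper's contribution and is not reproduced here):

    import numpy as np

    def resample_to_reference(ref_t, dev_t, dev_signal):
        # Linearly interpolate one device's gaze channel onto the
        # reference device's timestamps for sample-by-sample comparison.
        return np.interp(ref_t, dev_t, dev_signal)

    def rms_s2s_precision_deg(x_deg, y_deg):
        # Sample-to-sample RMS precision, a standard signal-quality metric.
        steps = np.hypot(np.diff(x_deg), np.diff(y_deg))
        return float(np.sqrt(np.mean(steps ** 2)))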
{"title":"SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices","authors":"Samantha Aziz, D. Lohr, Oleg V. Komogortsev","doi":"10.1145/3517031.3532522","DOIUrl":"https://doi.org/10.1145/3517031.3532522","url":null,"abstract":"Comparing the performance of new eye-tracking devices against an established benchmark is vital for identifying differences in the way eye movements are reported by each device. This paper introduces a new paired data set comprised of eye movement recordings captured simultaneously with both the EyeLink 1000—considered the “gold standard” in eye-tracking research studies—and the recently released AdHawk MindLink eye tracker. Our work presents a methodology for simultaneous data collection and a comparison of the resulting eye-tracking signal quality achieved by each device.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"100 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114085599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring Cognitive Effort with Pupillary Activity and Fixational Eye Movements When Reading: Longitudinal Comparison of Children With and Without Primary Music Education
Agata Rodziewicz-Cybulska, Krzysztof Krejtz, A. Duchowski, I. Krejtz
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529636
This article evaluates the Low/High Index of Pupillary Activity (LHIPA), a measure of cognitive effort based on pupil response, in the context of reading. At the beginning of 2nd and 3rd grade, 107 children (8-9 years old) from a music primary school and a general primary school were asked to read 40 sentences with keywords differing in length and frequency while their eye movements were recorded. Sentences with low-frequency or long keywords received more attention than sentences with high-frequency or short keywords. The word frequency and length effects were more pronounced in younger children. In 2nd grade, children from the music school dwelt less on sentences with short frequent keywords than on sentences with long frequent keywords. As expected, LHIPA decreased over sentences with low-frequency short keywords, suggesting more cognitive effort at earlier stages of reading ability. This finding shows the utility of LHIPA as a measure of cognitive effort in education.
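LHIPA itself is a wavelet-based measure (Duchowski et al., 2020). The sketch below is a deliberately simplified stand-in (a low/high-frequency energy ratio of the pupil signal using the same sym16 wavelet family), not the authors' implementation:

    import numpy as np
    import pywt  # PyWavelets

    def low_high_pupil_ratio(pupil_diameter):
        # Multilevel DWT of the pupil signal; compare energy in a
        # low-frequency detail band against the finest (high-frequency) band.
        wavelet = pywt.Wavelet("sym16")
        max_level = pywt.dwt_max_level(len(pupil_diameter), wavelet.dec_len)
        coeffs = pywt.wavedec(pupil_diameter, wavelet, level=max_level)
        d_high = coeffs[-1]                       # level-1 detail (finest)
        d_low = coeffs[-max(max_level // 2, 1)]   # mid-level detail
        return np.sum(np.abs(d_low)) / (np.sum(np.abs(d_high)) + 1e-12)

Lower values of such a ratio would, by analogy with LHIPA, indicate greater cognitive effort.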
{"title":"Measuring Cognitive Effort with Pupillary Activity and Fixational Eye Movements When Reading: Longitudinal Comparison of Children With and Without Primary Music Education","authors":"Agata Rodziewicz-Cybulska, Krzysztof Krejtz, A. Duchowski, I. Krejtz","doi":"10.1145/3517031.3529636","DOIUrl":"https://doi.org/10.1145/3517031.3529636","url":null,"abstract":"This article evaluates the Low/High Index of Pupillary Activity (LHIPA), a measure of cognitive effort based on pupil response, in the context of reading. At the beginning of 2nd and 3rd grade, 107 children (8-9 y.o.) from music and general primary school were asked to read 40 sentences with keywords differing in length and frequency while their eye movements were recorded. Sentences with low frequency or long keywords received more attention than sentences with high frequent or short keywords. The word frequency and length effects were more pronounced in younger children. At the 2nd grade, music children dwelt less on sentences with short frequent keywords than on sentences with long frequent keywords. As expected LHIPA decreased over sentences with low frequency short keywords suggesting more cognitive effort at earlier stages of reading ability. This finding shows the utility of LHIPA as a measure of cognitive effort in education.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130662940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking
Negar Alinaghi, Ioannis Giannopoulos
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529624
Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs to simulate the gaze signal at head level. Using image processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and the respective frames are then merged. Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, with approximately 37% error introduced by employing saccades without considering head movements.
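The projection step can be expressed compactly with OpenCV, assuming the per-frame 3x3 homographies into the stitched panorama are available (e.g., from the feature matching performed during stitching; function names here are illustrative):

    import numpy as np
    import cv2

    def project_to_panorama(points_px, H):
        # Map pixel coordinates from one video frame into the
        # stitched-image coordinate system via its homography H.
        pts = np.asarray(points_px, np.float32).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    def saccade_length_px(fix_a, H_a, fix_b, H_b):
        # Saccade amplitude between consecutive fixations, measured
        # after both are projected into the shared coordinate system.
        pa = project_to_panorama([fix_a], H_a)[0]
        pb = project_to_panorama([fix_b], H_b)[0]
        return float(np.linalg.norm(pb - pa))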
{"title":"Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking","authors":"Negar Alinaghi, Ioannis Giannopoulos","doi":"10.1145/3517031.3529624","DOIUrl":"https://doi.org/10.1145/3517031.3529624","url":null,"abstract":"Saccadic eye movements are known to serve as a suitable proxy for tasks prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs in order to simulate the gaze signal at head-level. Using image processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and then respective frames are merged. Lastly, pairs of consecutive fixations –forming a saccade- are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, and approximately 37% of error introduced by employing saccades without head-movement consideration.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132311142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EyeLikert: Eye-based Interactions for Answering Surveys
Moritz Langner, N. Aßfalg, Peyman Toreini, A. Maedche
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529776
Surveys are a widely used method for collecting data from participants. However, responding to surveys is a time-consuming task that requires cognitive and physical effort. Eye-based interactions offer the advantages of high-speed pointing, low physical effort, and implicitness. These advantages have already been leveraged successfully in other domains, but have so far not been investigated for supporting participants in responding to surveys. In this paper, we present EyeLikert, a tool that enables users to answer Likert-scale survey questions with their eyes. EyeLikert integrates three different eye-based interactions that take the Midas Touch problem into account. We hypothesize that enabling eye-based interactions for filling out surveys has the potential to reduce physical effort, increase the speed of answering questions, and thereby reduce drop-out rates.
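One common way to address the Midas Touch problem is dwell time: a selection fires only after gaze has rested continuously on an option. A sketch of that idea (the paper integrates three eye-based interactions; that dwell is one of them, and the 800 ms threshold, are our assumptions):

    import time

    class DwellSelector:
        DWELL_S = 0.8  # illustrative dwell threshold

        def __init__(self):
            self.option, self.since = None, None

        def update(self, gazed_option):
            # Call once per gaze sample with the Likert option under gaze
            # (or None); returns an option exactly once per completed dwell.
            now = time.monotonic()
            if gazed_option != self.option:
                self.option, self.since = gazed_option, now
                return None
            if self.option is not None and now - self.since >= self.DWELL_S:
                chosen, self.option, self.since = self.option, None, None
                return chosen
            return None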
{"title":"EyeLikert: Eye-based Interactions for Answering Surveys","authors":"Moritz Langner, N. Aßfalg, Peyman Toreini, A. Maedche","doi":"10.1145/3517031.3529776","DOIUrl":"https://doi.org/10.1145/3517031.3529776","url":null,"abstract":"Surveys are a widely used method for data collection from participants. However, responding to surveys is a time consuming task and requires cognitive and physical efforts of the participants. Eye-based interactions offer the advantage of high speed pointing, low physical effort and implicitness. These advantages are already successfully leveraged in different domains, but so far not investigated in supporting participants in responding to surveys. In this paper, we present EyeLikert, a tool that enables users to answer Likert-scale questions in surveys with their eyes. EyeLikert integrates three different eye-based interactions considering the Midas Touch problem. We hypothesize that enabling eye-based interactions to fill out surveys offers the potential to reduce the physical effort, increase the speed of responding questions, and thereby reduce drop-out rates.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-User Eye-Tracking
Bhanuka Mahanama
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3532197
Human gaze characteristics provide informative cues about behavior during various activities. With traditional eye trackers, assessing gaze characteristics in the wild requires a dedicated device per participant and is therefore not feasible for large-scale experiments. In this study, we propose a multi-user eye-tracking system based on commodity hardware, leveraging recent advancements in deep neural networks and large-scale datasets. Our preliminary studies provide promising results for multi-user eye tracking on commodity hardware, offering a cost-effective solution for large-scale studies.
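A sketch of the per-frame loop such a system could use (one webcam, several participants), with an OpenCV face detector and a placeholder for whatever appearance-based gaze network is plugged in; the specific network is unspecified in the abstract:

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def estimate_gaze_all_users(frame, gaze_model):
        # Detect every face in the frame and run the gaze estimator on
        # each crop, yielding one gaze estimate per participant.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        results = []
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
            results.append(((x, y, w, h), gaze_model(crop)))
        return results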
{"title":"Multi-User Eye-Tracking","authors":"Bhanuka Mahanama","doi":"10.1145/3517031.3532197","DOIUrl":"https://doi.org/10.1145/3517031.3532197","url":null,"abstract":"The human gaze characteristics provide informative cues on human behavior during various activities. Using traditional eye trackers, assessing gaze characteristics in the wild requires a dedicated device per participant and therefore is not feasible for large-scale experiments. In this study, we propose a commodity hardware-based multi-user eye-tracking system. We leverage the recent advancements in Deep Neural Networks and large-scale datasets for implementing our system. Our preliminary studies provide promising results for multi-user eye-tracking on commodity hardware, providing a cost-effective solution for large-scale studies.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129808919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing Instructor’s Gaze Information for Online Video-based Learning: Preliminary Study
Daun Kim, Jae-Yeop Jeong, Sumin Hong, Namsub Kim, Jin-Woo Jeong
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529238
Video-based online educational content has become increasingly popular. However, because communication and interaction between learners and instructors are limited, various problems affecting learning performance have emerged. Gaze-sharing techniques have received much attention as a means of addressing this problem; however, there is still considerable room for improvement. In this work-in-progress paper, we introduce possible improvements to gaze visualization strategies and report preliminary results from the first step toward our final goal. Through a user study with 30 university students, we confirmed the feasibility of the prototype system and identified future directions for our research.
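As one example of a gaze-sharing visualization, the instructor's recorded gaze can be blended onto each lecture-video frame as a translucent marker (a sketch of one plausible style only; the strategies the paper actually compares are not detailed in the abstract):

    import cv2

    def overlay_instructor_gaze(frame, gaze_xy, radius=18, alpha=0.45):
        # Draw a translucent disc at the instructor's gaze position.
        marked = frame.copy()
        cv2.circle(marked, (int(gaze_xy[0]), int(gaze_xy[1])),
                   radius, (0, 0, 255), thickness=-1)
        return cv2.addWeighted(marked, alpha, frame, 1 - alpha, 0)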
{"title":"Visualizing Instructor’s Gaze Information for Online Video-based Learning: Preliminary Study","authors":"Daun Kim, Jae-Yeop Jeong, Sumin Hong, Namsub Kim, Jin-Woo Jeong","doi":"10.1145/3517031.3529238","DOIUrl":"https://doi.org/10.1145/3517031.3529238","url":null,"abstract":"Video-based online educational content has been more popular nowadays. However, due to the limited communication and interaction between the learners and instructors, various problems regarding learning performance have occurred. Gaze sharing techniques received much attention as a means to address this problem, however, there still exists a lot of room for improvement. In this work-in-progress paper, we introduce some possible improvement points regarding gaze visualization strategies and report the preliminary results of our first step towards our final goal. Through a user study with 30 university students, we found the feasibility of the prototype system and the future directions of our research.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116660001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mind Wandering Trait-level Tendencies During Lecture Viewing: A Pilot Study
Francesca Zermiani, A. Bulling, M. Wirzberger
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529241
Mind wandering (MW) is defined as a shift of attention toward task-unrelated internal thoughts that is pervasive and disruptive for learning performance. Current state-of-the-art gaze-based attention-aware intelligent systems can detect MW from eye movements and deliver interventions to mitigate its negative effects. However, the beneficial functions of MW and its trait-level tendency, defined as the content of the MW experience, are still largely neglected by these systems. In this pilot study, we address the question of whether different MW trait-level tendencies can be detected through the frequency and duration of off-screen fixations and the blink rate during a lecture viewing task. We focus on prospective planning and creative problem-solving as two of the main MW trait-level tendencies. Although the differences were not statistically significant, the descriptive values show a higher frequency and duration of off-screen fixations, but a lower blink rate, in the creative problem-solving MW condition. Interestingly, we do find a highly significant correlation between MW level and engagement scores in the prospective planning MW group. Potential explanations for the observed results are discussed. Overall, these findings represent a preliminary step toward the development of more accurate and adaptive learning technologies, and call for further studies on MW trait-level tendency detection.
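The three candidate predictors reduce to simple per-trial statistics; a sketch, assuming fixations arrive as (x, y, duration) tuples in screen pixels and blinks as a count (this data layout is assumed for illustration):

    def mw_features(fixations, n_blinks, screen_w, screen_h, trial_s):
        # Off-screen fixation frequency and duration, plus blink rate:
        # the features examined for separating MW trait-level tendencies.
        off = [f for f in fixations
               if not (0 <= f[0] < screen_w and 0 <= f[1] < screen_h)]
        return {
            "offscreen_fix_per_min": 60.0 * len(off) / trial_s,
            "offscreen_fix_mean_s": (sum(f[2] for f in off) / len(off)
                                     if off else 0.0),
            "blinks_per_min": 60.0 * n_blinks / trial_s,
        }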
{"title":"Mind Wandering Trait-level Tendencies During Lecture Viewing: A Pilot Study","authors":"Francesca Zermiani, A. Bulling, M. Wirzberger","doi":"10.1145/3517031.3529241","DOIUrl":"https://doi.org/10.1145/3517031.3529241","url":null,"abstract":"Mind wandering (MW) is defined as a shift of attention to task-unrelated internal thoughts that is pervasive and disruptive for learning performance. Current state-of-the-art gaze-based attention-aware intelligent systems are capable of detecting MW from eye movements and delivering interventions to mitigate its negative effects. However, the beneficial functions of MW and its trait-level tendency, defined as the content of MW experience, are still largely neglected by these systems. In this pilot study, we address the questions of whether different MW trait-level tendencies can be detected through off-screen fixations’ frequency and duration and blink rate during a lecture viewing task. We focus on prospective planning and creative problem-solving as two of the main MW trait-level tendencies. Despite the non-significance, the descriptive values show a higher frequency and duration of off-screen fixations, but lower blink rate, in the creative problem-solving MW condition. Interestingly, we do find a highly significant correlation between MW level and engagement scores in the prospective planning MW group. Potential explanations for the observed results are discussed. Overall, these findings represent a preliminary step towards the development of more accurate and adaptive learning technologies, and call for further studies on MW trait-level tendency detection.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128902757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing dignity for adaptive wheelchair users via a hybrid eye tracking and electromyography training game
Peter A. Smith, Matt Dombrowski, Shea McLinden, Calvin MacDonald, Devon Lynn, John Sparkman, Dominique Courbin, Albert Manero
2022 Symposium on Eye Tracking Research and Applications. https://doi.org/10.1145/3517031.3529612
Maintaining autonomous activities can be challenging for patients with neuromuscular disorders or quadriplegia, for whom control of joysticks for powered wheelchairs may not be feasible. Advancements in human-machine interfaces have produced methods that capture an individual's intent through non-traditional controls and communicate the user's desires to a robotic interface. This research explores the design of a training game that teaches users to control a wheelchair through such a device using electromyography (EMG). The training game combines EMG and eye tracking to enhance the impression of dignity while building self-efficacy and supporting autonomy for users. The system implements both eye tracking and surface electromyography, via the temporalis muscles, for gamified training and simulation of a novel wheelchair interface.
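A sketch of how the two signals could divide the control problem: gaze selects the direction while a temporalis-EMG clench gates motion, so that simply looking around is never misread as intent (the mapping and threshold are illustrative, not the authors' scheme):

    def wheelchair_command(gaze_region, emg_rms, clench_threshold=0.12):
        # EMG acts as the 'clutch': below threshold the chair stays put.
        if emg_rms < clench_threshold:
            return "stop"
        return {"left": "turn_left", "right": "turn_right",
                "up": "forward", "down": "reverse"}.get(gaze_region, "stop")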
{"title":"Advancing dignity for adaptive wheelchair users via a hybrid eye tracking and electromyography training game","authors":"Peter A. Smith, Matt Dombrowski, Shea McLinden, Calvin MacDonald, Devon Lynn, John Sparkman, Dominique Courbin, Albert Manero","doi":"10.1145/3517031.3529612","DOIUrl":"https://doi.org/10.1145/3517031.3529612","url":null,"abstract":"Maintaining autonomous activities can be challenging for patients with neuromuscular disorders or quadriplegia, where control of joysticks for powered wheelchairs may not be feasible. Advancements in human machine interfaces have resulted in methods to capture the intentionality of the individual through non-traditional controls and communicating the users desires to a robotic interface. This research explores the design of a training game that teaches users to control a wheelchair through such a device that utilizes electromyography (EMG). The training game combines the use of EMG and eye tracking to enhance the impression of dignity while building self-efficacy and supporting autonomy for users. The system implements both eye tracking and surface electromyography, via the temporalis muscles, for gamified training and simulation of a novel wheelchair interface.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114216597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}