Skill Characterisation of Sonographer Gaze Patterns during Second Trimester Clinical Fetal Ultrasounds using Time Curves
Clare Teng, Lok Hin Lee, Jayne Lander, Lior Drukker, Aris T Papageorghiou, Alison J Noble
We present a method for skill characterisation of sonographer gaze patterns while performing routine second trimester fetal anatomy ultrasound scans. The position and scale of fetal anatomical planes during each scan differ because of fetal position, movements and sonographer skill. A standardised reference is required to compare recorded eye-tracking data for skill characterisation. We propose using an affine transformer network to localise the anatomy circumference in video frames, for normalisation of eye-tracking data. We use an event-based data visualisation, time curves, to characterise sonographer scanning patterns. We chose brain and heart anatomical planes because they vary in levels of gaze complexity. Our results show that when sonographers search for the same anatomical plane, even though the landmarks visited are similar, their time curves display different visual patterns. Brain planes also have, on average, more events or landmarks than heart planes, which highlights anatomy-specific differences in searching approaches.
{"title":"Skill Characterisation of Sonographer Gaze Patterns during Second Trimester Clinical Fetal Ultrasounds using Time Curves.","authors":"Clare Teng, Lok Hin Lee, Jayne Lander, Lior Drukker, Aris T Papageorghiou, Alison J Noble","doi":"10.1145/3517031.3529637","DOIUrl":"https://doi.org/10.1145/3517031.3529637","url":null,"abstract":"<p><p>We present a method for skill characterisation of sonographer gaze patterns while performing routine second trimester fetal anatomy ultrasound scans. The position and scale of fetal anatomical planes during each scan differ because of fetal position, movements and sonographer skill. A standardised reference is required to compare recorded eye-tracking data for skill characterisation. We propose using an affine transformer network to localise the anatomy circumference in video frames, for normalisation of eye-tracking data. We use an event-based data visualisation, time curves, to characterise sonographer scanning patterns. We chose brain and heart anatomical planes because they vary in levels of gaze complexity. Our results show that when sonographers search for the same anatomical plane, even though the landmarks visited are similar, their time curves display different visual patterns. Brain planes also, on average, have more events or landmarks occurring than the heart, which highlights anatomy-specific differences in searching approaches.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2022 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614191/pdf/EMS159394.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9930156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualising Spatio-Temporal Gaze Characteristics for Exploratory Data Analysis in Clinical Fetal Ultrasound Scans
Clare Teng, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, Alison J Noble
Visualising patterns in clinicians' eye movements while interpreting fetal ultrasound imaging videos is challenging. Across and within videos, there are differences in the size and position of Areas-of-Interest (AOIs) due to fetal position, movement and sonographer skill. Currently, AOIs are manually labelled or identified using eye-tracker manufacturer specifications which are not study specific. We propose using unsupervised clustering to identify meaningful AOIs and bi-contour plots to visualise spatio-temporal gaze characteristics. We use Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to identify the AOIs, and use their corresponding images to capture granular changes within each AOI. We then visualise transitions within and between AOIs as read by the sonographer. We compare our method to a standardised eye-tracking manufacturer algorithm. Our method captures granular changes in gaze characteristics which are otherwise not shown. Our method is suitable for exploratory data analysis of eye-tracking data involving multiple participants and AOIs.
{"title":"Visualising Spatio-Temporal Gaze Characteristics for Exploratory Data Analysis in Clinical Fetal Ultrasound Scans.","authors":"Clare Teng, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, Alison J Noble","doi":"10.1145/3517031.3529635","DOIUrl":"https://doi.org/10.1145/3517031.3529635","url":null,"abstract":"<p><p>Visualising patterns in clinicians' eye movements while interpreting fetal ultrasound imaging videos is challenging. Across and within videos, there are differences in size an d position of Areas-of-Interest (AOIs) due to fetal position, movement and sonographer skill. Currently, AOIs are manually labelled or identified using eye-tracker manufacturer specifications which are not study specific. We propose using unsupervised clustering to identify meaningful AOIs and bi-contour plots to visualise spatio-temporal gaze characteristics. We use Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to identify the AOIs, and use their corresponding images to capture granular changes within each AOI. Then we visualise transitions within and between AOIs as read by the sonographer. We compare our method to a standardised eye-tracking manufacturer algorithm. Our method captures granular changes in gaze characteristics which are otherwise not shown. Our method is suitable for exploratory data analysis of eye-tracking data involving multiple participants and AOIs.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2022 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614061/pdf/EMS159392.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9558055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Post-processing integration and semi-automated analysis of eye-tracking and motion-capture data obtained in immersive virtual reality environments to measure visuomotor integration
Haylie L Miller, Ian R Zurutuza, Nicholas E Fears, Suleyman O Polat, Rodney D Nielsen
Mobile eye-tracking and motion-capture techniques yield rich, precisely quantifiable data that can inform our understanding of the relationship between visual and motor processes during task performance. However, these systems are rarely used in combination, in part because of the significant time and human resources required for post-processing and analysis. Recent advances in computer vision have opened the door for more efficient processing and analysis solutions. We developed a post-processing pipeline to integrate mobile eye-tracking and full-body motion-capture data. These systems were used simultaneously to measure visuomotor integration in an immersive virtual environment. Our approach enables calculation of a 3D gaze vector that can be mapped to the participant's body position and objects in the virtual environment using a uniform coordinate system. This approach is generalizable to other configurations, and enables more efficient analysis of eye, head, and body movements together during visuomotor tasks administered in controlled, repeatable environments.
{"title":"Post-processing integration and semi-automated analysis of eye-tracking and motion-capture data obtained in immersive virtual reality environments to measure visuomotor integration.","authors":"Haylie L Miller, Ian R Zurutuza, Nicholas E Fears, Suleyman O Polat, Rodney D Nielsen","doi":"10.1145/3450341.3458881","DOIUrl":"10.1145/3450341.3458881","url":null,"abstract":"<p><p>Mobile eye-tracking and motion-capture techniques yield rich, precisely quantifiable data that can inform our understanding of the relationship between visual and motor processes during task performance. However, these systems are rarely used in combination, in part because of the significant time and human resources required for post-processing and analysis. Recent advances in computer vision have opened the door for more efficient processing and analysis solutions. We developed a post-processing pipeline to integrate mobile eye-tracking and full-body motion-capture data. These systems were used simultaneously to measure visuomotor integration in an immersive virtual environment. Our approach enables calculation of a 3D gaze vector that can be mapped to the participant's body position and objects in the virtual environment using a uniform coordinate system. This approach is generalizable to other configurations, and enables more efficient analysis of eye, head, and body movements together during visuomotor tasks administered in controlled, repeatable environments.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8276594/pdf/nihms-1718937.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39185504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fixational stability as a measure for the recovery of visual function in amblyopia
Avi M Aizenman, Dennis M Levi
People with amblyopia have been shown to have decreased fixational stability, particularly those with strabismic amblyopia. Fixational stability and visual acuity have been shown to be tightly correlated across multiple studies, suggesting a relationship between acuity and oculomotor stability. Reduced visual acuity is the sine qua non of amblyopia, and recovery is measured by the improvement in visual acuity. Here we ask whether fixational stability can be used as an objective marker for the recovery of visual function in amblyopia. We tracked children's fixational stability during patching treatment over time and found that fixational stability changed alongside improvements in visual acuity. This suggests that fixational stability can be used as an objective measure for monitoring treatment in amblyopia and other disorders.
{"title":"Fixational stability as a measure for the recovery of visual function in amblyopia.","authors":"Avi M Aizenman, Dennis M Levi","doi":"10.1145/3450341.3458493","DOIUrl":"https://doi.org/10.1145/3450341.3458493","url":null,"abstract":"<p><p>People with amblyopia have been shown to have decreased fixational stability, particularly those with strabismic amblyopia. Fixational stability and visual acuity have been shown to be tightly correlated across multiple studies, suggesting a relationship between acuity and oculomotor stability. Reduced visual acuity is the sine qua non of amblyopia, and recovery is measured by the improvement in visual acuity. Here we ask whether fixational stability can be used as an objective marker for the recovery of visual function in amblyopia. We tracked children's fixational stability during patching treatment over time and found fixational stability changes alongside improvements in visual acuity. This suggests fixational stability can be used as an objective measure for monitoring treatment in amblyopia and other disorders.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3450341.3458493","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9136268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Positional head-eye tracking outside the lab: an open-source solution
Peter Hausamann, Christian Sinnott, Paul R MacNeilage
Simultaneous head and eye tracking has traditionally been confined to the laboratory, and real-world motion tracking has been limited to measuring linear acceleration and angular velocity. Recently available mobile devices such as the Pupil Core eye tracker and the Intel RealSense T265 motion tracker promise to deliver accurate measurements outside the lab. Here we propose a hardware and software framework that combines both devices into a robust, usable, low-cost head and eye tracking system. The software is open source and the required hardware modifications can be 3D printed. We demonstrate the system's ability to measure head and eye movements in two tasks: an eyes-fixed head rotation task eliciting the vestibulo-ocular reflex inside the laboratory, and a natural locomotion task in which a subject walks around a building outside the laboratory. The resulting head and eye movements are discussed, as well as future implementations of this system.
{"title":"Positional head-eye tracking outside the lab: an open-source solution.","authors":"Peter Hausamann, Christian Sinnott, Paul R MacNeilage","doi":"10.1145/3379156.3391365","DOIUrl":"https://doi.org/10.1145/3379156.3391365","url":null,"abstract":"<p><p>Simultaneous head and eye tracking has traditionally been confined to a laboratory setting and real-world motion tracking limited to measuring linear acceleration and angular velocity. Recently available mobile devices such as the Pupil Core eye tracker and the Intel RealSense T265 motion tracker promise to deliver accurate measurements outside the lab. Here, the researchers propose a hard- and software framework that combines both devices into a robust, usable, low-cost head and eye tracking system. The developed software is open source and the required hardware modifications can be 3D printed. The researchers demonstrate the system's ability to measure head and eye movements in two tasks: an eyes-fixed head rotation task eliciting the vestibulo-ocular reflex inside the laboratory, and a natural locomotion task where a subject walks around a building outside of the laboratory. The resultant head and eye movements are discussed, as well as future implementations of this system.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3379156.3391365","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25530532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-based Eye Trackers
Isayas B Adhanom, Samantha C Lee, Eelke Folmer, Paul MacNeilage
As virtual reality (VR) garners more attention for eye tracking research, knowledge of the accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. The tool is customizable without altering the source code, and the code itself can be modified for further customization. The outputs are available in real time and easy to interpret, making eye tracking in VR more approachable for all users.
{"title":"GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-based Eye Trackers.","authors":"Isayas B Adhanom, Samantha C Lee, Eelke Folmer, Paul MacNeilage","doi":"10.1145/3379156.3391374","DOIUrl":"https://doi.org/10.1145/3379156.3391374","url":null,"abstract":"As virtual reality (VR) garners more attention for eye tracking research, knowledge of accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. This tool is customizable without having to alter the source code, but changes to the code allow for further alteration. The outputs are available in real time and easy to interpret, making eye tracking with VR more approachable for all users.","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3379156.3391374","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25537137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CIDER: Enhancing the Performance of Computational Eyeglasses
Addison Mayberry, Yamin Tun, Pan Hu, Duncan Smith-Freedman, Benjamin Marlin, Christopher Salthouse, Deepak Ganesan
The human eye offers a fascinating window into an individual's health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. We demonstrate CIDER, a system that operates in a highly optimized low-power mode in indoor settings, using a fast Search-Refine controller to track the eye; it detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under that condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared and b) error in estimating pupil center and pupil dilation. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22 mm). Our end-to-end results show that we can operate at power levels of roughly 7 mW at a 4 Hz eye tracking rate, or roughly 32 mW at rates upwards of 250 Hz.
{"title":"CIDER: Enhancing the Performance of Computational Eyeglasses.","authors":"Addison Mayberry, Yamin Tun, Pan Hu, Duncan Smith-Freedman, Benjamin Marlin, Christopher Salthouse, Deepak Ganesan","doi":"10.1145/2857491.2884063","DOIUrl":"https://doi.org/10.1145/2857491.2884063","url":null,"abstract":"<p><p>The human eye offers a fascinating window into an individual's health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. We demonstrate CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared and b) error in estimating pupil center and pupil dilation. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22mm). Our end-to-end results show that we can operate at power levels of roughly 7mW at a 4Hz eye tracking rate, or roughly 32mW at rates upwards of 250Hz.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2016 ","pages":"313-314"},"PeriodicalIF":0.0,"publicationDate":"2016-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/2857491.2884063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35986484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Relationships Between Fixation Identification Algorithms and Fractal Box Counting Methods
Quan Wang, Elizabeth Kim, Katarzyna Chawarska, Brian Scassellati, Steven Zucker, Frederick Shic
Fixation identification algorithms facilitate data comprehension and provide analytical convenience in eye-tracking analysis. However, current fixation algorithms for eye-tracking analysis are heavily dependent on parameter choices, leading to instabilities in results and incompleteness in reporting. This work examines the nature of human scanning patterns during complex scene viewing. We show that standard implementations of the commonly used distance-dispersion algorithm for fixation identification are functionally equivalent to greedy spatiotemporal tiling. We show that modeling the number of fixations as a function of tiling size leads to a measure of fractal dimensionality through box counting. We apply this technique to examine scale-free gaze behaviors in toddlers and adults looking at images of faces and blocks, as well as a large number of adults looking at movies or static images. The distributional aspects of the number of fixations may suggest a fractal structure to gaze patterns in free scanning and imply that the incompleteness of standard algorithms may be due to the scale-free behaviors of the underlying scanning distributions. We discuss the nature of this hypothesis, its limitations, and offer directions for future work.
{"title":"On Relationships Between Fixation Identification Algorithms and Fractal Box Counting Methods.","authors":"Quan Wang, Elizabeth Kim, Katarzyna Chawarska, Brian Scassellati, Steven Zucker, Frederick Shic","doi":"10.1145/2578153.2578161","DOIUrl":"https://doi.org/10.1145/2578153.2578161","url":null,"abstract":"<p><p>Fixation identification algorithms facilitate data comprehension and provide analytical convenience in eye-tracking analysis. However, current fixation algorithms for eye-tracking analysis are heavily dependent on parameter choices, leading to instabilities in results and incompleteness in reporting. This work examines the nature of human scanning patterns during complex scene viewing. We show that standard implementations of the commonly used distance-dispersion algorithm for fixation identification are functionally equivalent to greedy spatiotemporal tiling. We show that modeling the number of fixations as a function of tiling size leads to a measure of fractal dimensionality through box counting. We apply this technique to examine scale-free gaze behaviors in toddlers and adults looking at images of faces and blocks, as well as large number of adults looking at movies or static images. The distributional aspects of the number of fixations may suggest a fractal structure to gaze patterns in free scanning and imply that the incompleteness of standard algorithms may be due to the scale-free behaviors of the underlying scanning distributions. We discuss the nature of this hypothesis, its limitations, and offer directions for future work.</p>","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"2014 ","pages":"67-74"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/2578153.2578161","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34120805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}