
Proceedings. Eye Tracking Research & Applications Symposium: Latest Publications

Skill Characterisation of Sonographer Gaze Patterns during Second Trimester Clinical Fetal Ultrasounds using Time Curves.
Pub Date : 2022-06-01 DOI: 10.1145/3517031.3529637
Clare Teng, Lok Hin Lee, Jayne Lander, Lior Drukker, Aris T Papageorghiou, Alison J Noble

We present a method for skill characterisation of sonographer gaze patterns while performing routine second trimester fetal anatomy ultrasound scans. The position and scale of fetal anatomical planes during each scan differ because of fetal position, movements and sonographer skill. A standardised reference is required to compare recorded eye-tracking data for skill characterisation. We propose using an affine transformer network to localise the anatomy circumference in video frames, for normalisation of eye-tracking data. We use an event-based data visualisation, time curves, to characterise sonographer scanning patterns. We chose brain and heart anatomical planes because they vary in levels of gaze complexity. Our results show that when sonographers search for the same anatomical plane, even though the landmarks visited are similar, their time curves display different visual patterns. Brain planes also, on average, have more events or landmarks occurring than the heart, which highlights anatomy-specific differences in searching approaches.
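The normalisation step above maps recorded gaze coordinates into an anatomy-centred reference frame before time curves are computed. A minimal sketch of that idea follows, assuming the affine parameters (scale, rotation, and translation of the anatomy circumference) have already been predicted by a localisation network; the function name and values are illustrative, not the authors' implementation.

```python
import numpy as np

def normalise_gaze(gaze_xy, scale, theta, translation):
    """Map raw gaze points into a standardised anatomy-centred frame.

    gaze_xy:     (N, 2) array of gaze positions in frame pixels.
    scale:       isotropic scale predicted for the anatomy circumference.
    theta:       in-plane rotation (radians) of the anatomy.
    translation: (2,) centre of the anatomy in frame pixels.
    """
    # Build the inverse affine transform: undo translation, rotation, scale.
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s], [s, c]])
    centred = gaze_xy - translation   # move anatomy centre to the origin
    return (centred @ rot.T) / scale  # rotate back and rescale

# Illustrative values only: one scan's predicted anatomy pose.
gaze = np.array([[412.0, 300.5], [398.2, 310.1]])
print(normalise_gaze(gaze, scale=120.0, theta=0.3,
                     translation=np.array([400.0, 320.0])))
```

With gaze from every scan expressed in this shared frame, time curves from different sonographers become directly comparable.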

Citations: 0
Visualising Spatio-Temporal Gaze Characteristics for Exploratory Data Analysis in Clinical Fetal Ultrasound Scans.
Pub Date : 2022-06-01 DOI: 10.1145/3517031.3529635
Clare Teng, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, Alison J Noble

Visualising patterns in clinicians' eye movements while interpreting fetal ultrasound imaging videos is challenging. Across and within videos, there are differences in size and position of Areas-of-Interest (AOIs) due to fetal position, movement and sonographer skill. Currently, AOIs are manually labelled or identified using eye-tracker manufacturer specifications which are not study specific. We propose using unsupervised clustering to identify meaningful AOIs and bi-contour plots to visualise spatio-temporal gaze characteristics. We use Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to identify the AOIs, and use their corresponding images to capture granular changes within each AOI. Then we visualise transitions within and between AOIs as read by the sonographer. We compare our method to a standardised eye-tracking manufacturer algorithm. Our method captures granular changes in gaze characteristics which are otherwise not shown. Our method is suitable for exploratory data analysis of eye-tracking data involving multiple participants and AOIs.
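The paper names HDBSCAN for AOI discovery; the sketch below shows how pooled gaze samples could be clustered into AOIs with the hdbscan Python package. The synthetic data and parameter choices are illustrative only, not the authors' settings.

```python
import numpy as np
import hdbscan  # pip install hdbscan

# Illustrative data: (x, y) gaze coordinates pooled across a video.
rng = np.random.default_rng(0)
gaze_points = np.vstack([
    rng.normal(loc=(200, 150), scale=15, size=(120, 2)),  # dense region 1
    rng.normal(loc=(420, 300), scale=20, size=(90, 2)),   # dense region 2
    rng.uniform(low=0, high=600, size=(40, 2)),           # scattered samples
])

# HDBSCAN finds dense AOIs without fixing the number of clusters in advance;
# points that belong to no dense region are labelled -1 (noise).
clusterer = hdbscan.HDBSCAN(min_cluster_size=25)
labels = clusterer.fit_predict(gaze_points)
print("AOIs found:", labels.max() + 1, "| noise points:", np.sum(labels == -1))
```

Because no cluster count is specified, the number of AOIs adapts to each video, which is what makes the labelling study specific rather than tied to manufacturer defaults.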

Citations: 1
Eye Tracking: Background, Methods, and Applications
Pub Date : 2022-01-01 DOI: 10.1007/978-1-0716-2391-6
{"title":"Eye Tracking: Background, Methods, and Applications","authors":"","doi":"10.1007/978-1-0716-2391-6","DOIUrl":"https://doi.org/10.1007/978-1-0716-2391-6","url":null,"abstract":"","PeriodicalId":74558,"journal":{"name":"Proceedings. Eye Tracking Research & Applications Symposium","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84104055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Post-processing integration and semi-automated analysis of eye-tracking and motion-capture data obtained in immersive virtual reality environments to measure visuomotor integration.
Pub Date : 2021-05-01 DOI: 10.1145/3450341.3458881
Haylie L Miller, Ian R Zurutuza, Nicholas E Fears, Suleyman O Polat, Rodney D Nielsen

Mobile eye-tracking and motion-capture techniques yield rich, precisely quantifiable data that can inform our understanding of the relationship between visual and motor processes during task performance. However, these systems are rarely used in combination, in part because of the significant time and human resources required for post-processing and analysis. Recent advances in computer vision have opened the door for more efficient processing and analysis solutions. We developed a post-processing pipeline to integrate mobile eye-tracking and full-body motion-capture data. These systems were used simultaneously to measure visuomotor integration in an immersive virtual environment. Our approach enables calculation of a 3D gaze vector that can be mapped to the participant's body position and objects in the virtual environment using a uniform coordinate system. This approach is generalizable to other configurations, and enables more efficient analysis of eye, head, and body movements together during visuomotor tasks administered in controlled, repeatable environments.
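The key output described is a 3D gaze vector expressed in the same coordinate system as the motion-capture data. Below is a minimal sketch of that mapping, assuming a shared world frame, a head-to-world rotation matrix from motion capture, and an eye-in-head gaze direction from the eye tracker; all names are hypothetical, and the authors' pipeline includes calibration and synchronisation steps not shown here.

```python
import numpy as np

def gaze_to_world(gaze_dir_eye, head_rotation, head_position):
    """Express an eye-in-head gaze ray in the shared world frame.

    gaze_dir_eye:  (3,) unit gaze direction reported in the head frame.
    head_rotation: (3, 3) head orientation from motion capture (head -> world).
    head_position: (3,) head position in world coordinates.
    Returns (origin, direction) of the gaze ray in world coordinates.
    """
    direction = head_rotation @ gaze_dir_eye
    return head_position, direction / np.linalg.norm(direction)

# Illustrative sample: head yawed 90 degrees, eye looking straight ahead (+y).
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
origin, d = gaze_to_world(np.array([0.0, 1.0, 0.0]), R,
                          np.array([1.0, 2.0, 1.6]))
print(origin, d)  # gaze now points along -x in the world frame
```

Once the ray lives in world coordinates, intersecting it with tracked body segments or virtual objects reduces to standard ray-geometry tests.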

Citations: 0
Fixational stability as a measure for the recovery of visual function in amblyopia.
Pub Date : 2021-05-01 DOI: 10.1145/3450341.3458493
Avi M Aizenman, Dennis M Levi

People with amblyopia have been shown to have decreased fixational stability, particularly those with strabismic amblyopia. Fixational stability and visual acuity have been shown to be tightly correlated across multiple studies, suggesting a relationship between acuity and oculomotor stability. Reduced visual acuity is the sine qua non of amblyopia, and recovery is measured by the improvement in visual acuity. Here we ask whether fixational stability can be used as an objective marker for the recovery of visual function in amblyopia. We tracked children's fixational stability during patching treatment over time and found fixational stability changes alongside improvements in visual acuity. This suggests fixational stability can be used as an objective measure for monitoring treatment in amblyopia and other disorders.
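The abstract does not state which stability metric was used; a common choice in the fixational eye movement literature is the bivariate contour ellipse area (BCEA), sketched below under that assumption. Smaller BCEA values indicate more stable fixation.

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area of fixation samples (deg^2).

    x, y: horizontal/vertical gaze positions (degrees) during fixation.
    p:    proportion of samples the ellipse should cover (0.68 is common).
    """
    k = -np.log(1.0 - p)           # scaling factor for coverage proportion p
    rho = np.corrcoef(x, y)[0, 1]  # correlation between the two axes
    return (2.0 * k * np.pi * np.std(x, ddof=1) * np.std(y, ddof=1)
            * np.sqrt(1.0 - rho ** 2))

# Illustrative data: tighter scatter -> smaller BCEA -> more stable fixation.
rng = np.random.default_rng(1)
stable = bcea(rng.normal(0, 0.1, 500), rng.normal(0, 0.1, 500))
unstable = bcea(rng.normal(0, 0.5, 500), rng.normal(0, 0.5, 500))
print(f"stable: {stable:.3f} deg^2, unstable: {unstable:.3f} deg^2")
```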

Citations: 2
Positional head-eye tracking outside the lab: an open-source solution.
Pub Date : 2020-06-01 DOI: 10.1145/3379156.3391365
Peter Hausamann, Christian Sinnott, Paul R MacNeilage

Simultaneous head and eye tracking has traditionally been confined to a laboratory setting and real-world motion tracking limited to measuring linear acceleration and angular velocity. Recently available mobile devices such as the Pupil Core eye tracker and the Intel RealSense T265 motion tracker promise to deliver accurate measurements outside the lab. Here, the researchers propose a hard- and software framework that combines both devices into a robust, usable, low-cost head and eye tracking system. The developed software is open source and the required hardware modifications can be 3D printed. The researchers demonstrate the system's ability to measure head and eye movements in two tasks: an eyes-fixed head rotation task eliciting the vestibulo-ocular reflex inside the laboratory, and a natural locomotion task where a subject walks around a building outside of the laboratory. The resultant head and eye movements are discussed, as well as future implementations of this system.
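Combining the two devices requires, at minimum, aligning their independently sampled data streams in time. Here is a hedged sketch of one straightforward approach, linear interpolation of the motion-tracker stream onto the eye-tracker timestamps, assuming both devices report on a common clock; it is not the authors' published pipeline.

```python
import numpy as np

def align_streams(t_eye, gaze, t_head, head_pos):
    """Resample motion-tracker samples onto eye-tracker timestamps.

    t_eye:    (N,) eye-tracker timestamps (seconds, shared clock assumed).
    gaze:     (N, 2) gaze samples.
    t_head:   (M,) motion-tracker timestamps.
    head_pos: (M, 3) head positions.
    Returns gaze paired with linearly interpolated head positions.
    """
    head_interp = np.column_stack(
        [np.interp(t_eye, t_head, head_pos[:, i])
         for i in range(head_pos.shape[1])]
    )
    return gaze, head_interp

# Illustrative: 200 Hz eye data and 50 Hz head data over one second.
t_eye = np.linspace(0, 1, 200)
t_head = np.linspace(0, 1, 50)
gaze = np.random.default_rng(2).normal(size=(200, 2))
head = np.column_stack([np.sin(t_head), np.cos(t_head),
                        np.full_like(t_head, 1.6)])
_, head_at_eye = align_streams(t_eye, gaze, t_head, head)
print(head_at_eye.shape)  # (200, 3): one head pose per gaze sample
```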

Citations: 11
GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-based Eye Trackers.
Pub Date : 2020-06-01 DOI: 10.1145/3379156.3391374
Isayas B Adhanom, Samantha C Lee, Eelke Folmer, Paul MacNeilage
As virtual reality (VR) garners more attention for eye tracking research, knowledge of accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. This tool is customizable without having to alter the source code, but changes to the code allow for further alteration. The outputs are available in real time and easy to interpret, making eye tracking with VR more approachable for all users.
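Accuracy and precision here can be read in their conventional eye-tracking senses: mean angular offset from a fixated target, and sample-to-sample RMS angular dispersion. The sketch below implements those standard definitions; GazeMetrics' exact computation may differ.

```python
import numpy as np

def angular_error(gaze_dirs, target_dir):
    """Angle (degrees) between each unit gaze direction and the target."""
    cosines = np.clip(gaze_dirs @ target_dir, -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

def accuracy_and_precision(gaze_dirs, target_dir):
    """Accuracy: mean angular offset from the fixated target.
    Precision: RMS of successive sample-to-sample angular differences."""
    accuracy = angular_error(gaze_dirs, target_dir).mean()
    diffs = np.degrees(np.arccos(np.clip(
        np.sum(gaze_dirs[1:] * gaze_dirs[:-1], axis=1), -1.0, 1.0)))
    precision = np.sqrt(np.mean(diffs ** 2))
    return accuracy, precision

# Illustrative: noisy unit gaze vectors around a target straight ahead (+z).
rng = np.random.default_rng(3)
samples = np.column_stack([rng.normal(0, 0.01, 300),
                           rng.normal(0, 0.01, 300),
                           np.ones(300)])
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
print(accuracy_and_precision(samples, np.array([0.0, 0.0, 1.0])))
```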
Citations: 15
CIDER: Enhancing the Performance of Computational Eyeglasses.
Pub Date : 2016-03-01 DOI: 10.1145/2857491.2884063
Addison Mayberry, Yamin Tun, Pan Hu, Duncan Smith-Freedman, Benjamin Marlin, Christopher Salthouse, Deepak Ganesan

The human eye offers a fascinating window into an individual's health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. We demonstrate CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared and b) error in estimating pupil center and pupil dilation. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22mm). Our end-to-end results show that we can operate at power levels of roughly 7mW at a 4Hz eye tracking rate, or roughly 32mW at rates upwards of 250Hz.
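CIDER's optimized search-refine controller is considerably more elaborate, but the basic dark-pupil idea it refines can be sketched simply: under near-infrared illumination the pupil is the darkest region of the eye image, so a thresholded centroid gives a first estimate of its centre. The code below is illustrative only and is not the authors' algorithm.

```python
import numpy as np

def pupil_center(frame, threshold=40):
    """Estimate the pupil centre as the centroid of dark pixels.

    frame: 2D uint8 grayscale eye image; the pupil is assumed to be the
    darkest large region (typical for NIR-illuminated eye images).
    """
    ys, xs = np.nonzero(frame < threshold)
    if xs.size == 0:
        return None  # no dark region found (e.g. during a blink)
    return xs.mean(), ys.mean()

# Illustrative: synthetic 120x160 image with a dark disc centred at (70, 60).
img = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.ogrid[:120, :160]
img[(xx - 70) ** 2 + (yy - 60) ** 2 < 15 ** 2] = 10
print(pupil_center(img))  # approximately (70.0, 60.0)
```

Processing only the pixels near the previous estimate, rather than the full frame, is one way such a first-pass estimate can be made cheap enough for milliwatt-scale power budgets.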

Citations: 4
On Relationships Between Fixation Identification Algorithms and Fractal Box Counting Methods.
Pub Date : 2014-03-01 DOI: 10.1145/2578153.2578161
Quan Wang, Elizabeth Kim, Katarzyna Chawarska, Brian Scassellati, Steven Zucker, Frederick Shic

Fixation identification algorithms facilitate data comprehension and provide analytical convenience in eye-tracking analysis. However, current fixation algorithms for eye-tracking analysis are heavily dependent on parameter choices, leading to instabilities in results and incompleteness in reporting. This work examines the nature of human scanning patterns during complex scene viewing. We show that standard implementations of the commonly used distance-dispersion algorithm for fixation identification are functionally equivalent to greedy spatiotemporal tiling. We show that modeling the number of fixations as a function of tiling size leads to a measure of fractal dimensionality through box counting. We apply this technique to examine scale-free gaze behaviors in toddlers and adults looking at images of faces and blocks, as well as large number of adults looking at movies or static images. The distributional aspects of the number of fixations may suggest a fractal structure to gaze patterns in free scanning and imply that the incompleteness of standard algorithms may be due to the scale-free behaviors of the underlying scanning distributions. We discuss the nature of this hypothesis, its limitations, and offer directions for future work.
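The paper's link between tiling size and fixation counts is the standard box-counting construction: tile the plane at several scales, count occupied boxes at each, and read the fractal dimension off the slope of the log-log relationship. A minimal sketch of that construction, with synthetic data for illustration:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2D point set.

    points: (N, 2) gaze coordinates, assumed scaled to the unit square.
    sizes:  iterable of box side lengths used to tile the unit square.
    Returns the slope of log(count) versus log(1/size).
    """
    counts = []
    for s in sizes:
        # Assign each point to a box and count occupied boxes (the tiling).
        occupied = np.unique(np.floor(points / s), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Illustrative: uniformly scattered gaze fills the plane (dimension near 2).
pts = np.random.default_rng(4).uniform(size=(5000, 2))
print(box_counting_dimension(pts, sizes=[0.5, 0.25, 0.125, 0.0625, 0.03125]))
```

A scale-free (fractal) scanning pattern is exactly the case where no single tiling size, and hence no single dispersion threshold, fully characterises the data, which is the paper's explanation for the parameter sensitivity of standard fixation algorithms.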

Citations: 4