
2022 Symposium on Eye Tracking Research and Applications: Latest Publications

Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531165
Nils Rodrigues, Lin Shao, Jiazhen Yan, T. Schreck, D. Weiskopf
We propose a three-step concept and visual design for supporting the visual exploration of high-dimensional data in scatterplots through eye-tracking. First, we extract subsets in the underlying data using existing classifications, automated clustering algorithms, or eye-tracking. For the latter, we map gaze to the underlying data dimensions in the scatterplot. Clusters of data points that have been the focus of the viewers’ gaze are marked as clusters of interest (eye-mind hypothesis). In a second step, our concept extracts various properties from statistics and scagnostics from the clusters. The third step uses these measures to compare the current data clusters from the main scatterplot to the same data in other dimensions. The results enable analysts to retrieve similar or dissimilar views as guidance to explore the entire data set. We provide a proof-of-concept implementation as a test bench and describe a use case to show a practical application and initial results.
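A minimal sketch of the implicit data selection step, assuming NumPy and SciPy; the function and parameter names (clusters_of_interest, min_fixations) are illustrative, not from the paper. Gaze samples are mapped to their nearest scatterplot points, and clusters that accumulate enough gaze hits are marked as clusters of interest:

```python
# Illustrative sketch only: map gaze to scatterplot points and mark clusters
# whose accumulated gaze-hit count exceeds a threshold (names are assumptions).
import numpy as np
from scipy.spatial import cKDTree

def clusters_of_interest(gaze_xy, data_xy, labels, min_fixations=20):
    """Return cluster labels that received at least `min_fixations` gaze samples.

    gaze_xy : (G, 2) gaze positions in plot coordinates
    data_xy : (N, 2) data point positions in the same coordinates
    labels  : (N,) non-negative integer cluster label per data point
    """
    tree = cKDTree(data_xy)
    _, nearest = tree.query(gaze_xy)      # nearest data point for each gaze sample
    hits = np.bincount(labels[nearest], minlength=labels.max() + 1)
    return np.flatnonzero(hits >= min_fixations)
```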
Citations: 1
For Your Eyes Only: Privacy-preserving eye-tracking datasets
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529618
Brendan David-John, Kevin R. B. Butler, Eakta Jain
Eye-tracking is a critical source of information for understanding human behavior and developing future mixed-reality technology. Eye-tracking enables applications that classify user activity or predict user intent. However, eye-tracking datasets collected during common virtual reality tasks have also been shown to enable unique user identification, which creates a privacy risk. In this paper, we focus on the problem of user re-identification from eye-tracking features. We adapt standardized privacy definitions of k-anonymity and plausible deniability to protect datasets of eye-tracking features, and evaluate performance against re-identification by a standard biometric identification model on seven VR datasets. Our results demonstrate that re-identification goes down to chance levels for the privatized datasets, even as utility is preserved to levels higher than 72% accuracy in document type classification.
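As a generic illustration of the k-anonymity notion applied to numeric eye-tracking feature vectors (this is not the authors' mechanism, and all names are assumptions), microaggregation replaces each record with the centroid of a group of at least k similar records:

```python
# Generic microaggregation sketch for k-anonymity over numeric feature vectors.
# Each record is replaced by the centroid of a group of >= k records, so no
# individual's raw features are released. Not the paper's actual mechanism.
import numpy as np

def microaggregate(features, k=5):
    """Return a k-anonymous copy of `features` (shape: n_records x n_features)."""
    order = np.argsort(features[:, 0])        # crude ordering used to form groups
    anonymized = features.astype(float).copy()
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k:                    # fold a short tail into the last group
            group = order[start - k:]
        anonymized[group] = features[group].mean(axis=0)
    return anonymized
```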
Citations: 9
A gaze-based study design to explore how competency evolves during a photo manipulation task
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531634
Nora Castner, Béla Umlauf, Ard Kastrati, M. Płomecka, William Schaefer, Enkelejda Kasneci, Z. Bylinskii
ACM Reference Format: Nora Castner, Béla Umlauf, Ard Kastrati, Martyna Plomecka, William Schaefer, Enkelejda Kasneci, and Zoya Bylinskii. 2022. A gaze-based study design to explore how competency evolves during a photo manipulation task. In Symposium on Eye Tracking Research and Applications (ETRA ’22 Technical Abstracts), June 8–11, 2022, Seattle, Washington. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3379155.3391320
Citations: 1
Introducing a Real-Time Advanced Eye Movements Analysis Pipeline
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532196
Gavindya Jayawardena
Real-Time Advanced Eye Movements Analysis Pipeline (RAEMAP) is an advanced pipeline to analyze traditional positional gaze measurements as well as advanced eye gaze measurements. The proposed implementation of RAEMAP includes real-time analysis of fixations, saccades, gaze transition entropy, and low/high index of pupillary activity. RAEMAP will also provide visualizations of fixations, fixations on AOIs, heatmaps, and dynamic AOI generation in real-time. This paper outlines the proposed architecture of RAEMAP.
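One of the measures listed above, gaze transition entropy, can be sketched as follows once fixations have been assigned to AOIs; the function name and the use of empirical visit proportions in place of the stationary distribution are simplifying assumptions, not RAEMAP's actual API:

```python
# Illustrative computation of gaze transition entropy from an AOI sequence.
# Empirical visit proportions approximate the stationary distribution.
import numpy as np

def gaze_transition_entropy(aoi_sequence, n_aois):
    """Shannon entropy of AOI-to-AOI transitions for one fixation sequence."""
    counts = np.zeros((n_aois, n_aois))
    for src, dst in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[src, dst] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    visits = counts.sum(axis=1) / max(counts.sum(), 1)   # how often each AOI is left
    log_p = np.zeros_like(probs)
    np.log2(probs, out=log_p, where=probs > 0)
    return float(-np.sum(visits[:, None] * probs * log_p))
```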
Citations: 1
Game Audio Impacts on Players’ Visual Attention, Model Performance for Cloud Gaming
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529621
Morva Saaty, M. Hashemi
Cloud gaming (CG) is a new approach to delivering a high-quality gaming experience to gamers anywhere, anytime, and on any device. To achieve this goal, CG requires high bandwidth, which is still a major challenge. Much existing research has focused on modeling or predicting players’ Visual Attention Maps (VAMs) and allocating bitrate accordingly. Although studies indicate that both the audio and video modalities influence human perception, only a few studies have considered audio impacts in cloud-based attention models. This paper demonstrates that the audio features in video games change players’ VAMs in various game scenarios. Our findings indicate that incorporating game audio improves the accuracy of the predicted attention maps by 13% on average compared to previous VAMs generated from visual saliency by the Game Attention Model for CG. The audio impact is more evident in video games with fewer visual components or indicators on the screen.
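Purely as a hypothetical illustration of folding an audio cue into a visual attention map (the paper's actual model is not reproduced here, and the weighting scheme is an assumption), a normalized weighted sum could look like this:

```python
# Hypothetical audio/visual attention fusion: a weighted sum of two maps that
# have each been normalized to a probability distribution. Weight is assumed.
import numpy as np

def fuse_attention_maps(visual_vam, audio_vam, audio_weight=0.3):
    """Blend a visual attention map with an audio-derived map of the same shape."""
    def normalize(m):
        m = m - m.min()
        total = m.sum()
        return m / total if total > 0 else m
    fused = (1.0 - audio_weight) * normalize(visual_vam) + audio_weight * normalize(audio_vam)
    return normalize(fused)
```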
Citations: 1
A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529616
Johannes Meyer, Tobias Wilm, Reinhold Fiess, T. Schlebusch, Wilhelm Stork, Enkelejda Kasneci
Eye-tracking is a key technology for future retinal-projection-based AR glasses, as it enables techniques such as foveated rendering and gaze-driven exit pupil steering, both of which increase the system’s overall performance. However, two of the major challenges video oculography systems face are robust gaze estimation in the presence of glasses slippage and the necessity of frequent sensor calibration. To overcome these challenges, we propose a novel, calibration-free eye-tracking sensor for AR glasses based on a highly transparent holographic optical element (HOE) and a laser scanner. We fabricate a segmented HOE generating two stereo images of the eye region. A single-pixel detector in combination with our stereo reconstruction algorithm is used to precisely calculate the gaze position. In our laboratory setup we demonstrate a calibration-free accuracy of 1.35° achieved by our eye-tracking sensor, highlighting the sensor’s suitability for consumer AR glasses.
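For orientation only, the textbook depth-from-disparity relation behind triangulating an eye feature from two rectified stereo views is sketched below; this is not the paper's single-pixel stereo reconstruction algorithm, and the parameters are illustrative:

```python
# Generic rectified-stereo triangulation (depth from disparity), shown only to
# illustrate the stereo idea; not the paper's reconstruction algorithm.
def triangulate_point(x_left, x_right, y, focal_px, baseline_m):
    """3D point (meters) from a rectified stereo correspondence.

    Pixel coordinates are assumed to be relative to the principal point.
    """
    disparity = x_left - x_right                # pixels, assumed > 0
    depth = focal_px * baseline_m / disparity   # Z = f * b / d
    return (x_left * depth / focal_px,          # X
            y * depth / focal_px,               # Y
            depth)                              # Z
```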
Citations: 1
Geometry-Aware Eye Image-To-Image Translation
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532524
Conny Lu, Qian Zhang, K. Krishnakumar, Jixu Chen, H. Fuchs, S. Talathi, Kunlin Liu
Recently, image-to-image translation (I2I) has met with great success in computer vision, but few works have paid attention to the geometric changes that occur during translation. The geometric changes are necessary to reduce the geometric gap between domains at the cost of breaking correspondence between translated images and the original ground truth. We propose a novel geometry-aware semi-supervised method to preserve this correspondence while still allowing geometric changes. The proposed method takes a synthetic image-mask pair as input and produces a corresponding real pair. We also utilize an objective function to ensure consistent geometric movement of the image and mask through the translation. Extensive experiments illustrate that our method yields an 11.23% higher mean Intersection-Over-Union than current methods on the downstream eye segmentation task. The generated images show a 15.9% decrease in Fréchet Inception Distance, indicating higher image quality.
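A minimal sketch of the mean Intersection-over-Union metric used for the downstream eye segmentation evaluation; the class count and names are illustrative:

```python
# Mean IoU over classes for integer-labeled segmentation masks (illustrative).
import numpy as np

def mean_iou(pred, target, n_classes):
    """Average per-class IoU, skipping classes absent from both masks."""
    ious = []
    for c in range(n_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0
```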
Citations: 1
Instant messaging multitasking while reading: a pilot eye-tracking study
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529237
L. Altamura, L. Salmerón, Yvonne Kammerer
This pilot study analyzes the reading patterns of 15 German students while receiving instant messages through a smartphone, imitating an online conversation. With this pilot study, we aim to test the eye-tracking setup and methodology employed, in which we analyze specifically the moment in which participants return to the reading after answering the instant messages. We explore the relationships with reading comprehension performance and differences across readers, considering individual differences regarding reading habits and multitasking behavior.
Citations: 0
LSTMs can distinguish dental expert saccade behavior with high ”plaque-urracy”
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529631
Nora Castner, Jonas Frankemölle, C. Keutel, F. Huettig, Enkelejda Kasneci
Much of the current expertise literature has found that domain specific tasks evoke different eye movements. However, research has yet to predict optimal image exploration using saccadic information and to identify and quantify differences in the search strategies between learners, intermediates, and expert practitioners. By employing LSTMs for scanpath classification, we found saccade features over time could distinguish all groups at high accuracy. The most distinguishing features were saccade velocity peak (72%), length (70%), and velocity average (68%). These findings promote the holistic theory of expert visual exploration that experts can quickly process the whole scene using longer and more rapid saccade behavior initially. The potential to integrate expertise model development from saccadic scanpath features into intelligent tutoring systems is the ultimate inspiration for our research. Additionally, this model is not confined to visual exploration in dental xrays, rather it can extend to other medical domains.
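A hedged sketch of one plausible setup for this kind of scanpath classification: an LSTM over per-saccade feature sequences (e.g., peak velocity, length, average velocity). Layer sizes, feature order, and names are assumptions rather than the authors' architecture:

```python
# Illustrative PyTorch LSTM classifier over sequences of saccade features.
# Hyperparameters and class count are assumptions, not the paper's model.
import torch
import torch.nn as nn

class SaccadeLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, n_saccades, n_features)
        _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the scanpath
        return self.head(h_n[-1])      # class logits (e.g., learner / intermediate / expert)

# Example: classify a batch of 8 scanpaths, each described by 50 saccades.
logits = SaccadeLSTM()(torch.randn(8, 50, 3))
```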
Citations: 4
On the Use of Distribution-based Metrics for the Evaluation of Drivers’ Fixation Maps Against Spatial Baselines
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3529629
Jaime Maldonado, Lino Antoni Giefer
A distinctive characteristic of human driver behavior is the spatial bias of gaze allocation toward the vanishing point of the road. This behavior can be evaluated by comparing fixation maps against a spatial-bias baseline using typical metrics such as the Pearson’s Correlation Coefficient (CC) and the Kullback-Leibler divergence (KL). CC and KL penalize false positives and negatives differently, which implies that they can be affected by the characteristics of the baseline. In this paper, we analyze the use of CC and KL for the evaluation of drivers’ fixation maps against two types of spatial-bias baselines: baselines obtained from recorded fixation maps (data-based) and 2D-Gaussian baselines (function-based). Our results indicate that the use of CC can lead to misleading interpretations due to single fixations outside of the spatial bias area when compared to data-based baselines. Thus, we argue that KL and CC should be considered simultaneously under specific modeling assumptions.
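The two metrics discussed above can be sketched as follows for fixation maps given as 2D arrays; the normalization choices and the epsilon smoothing are implementation assumptions:

```python
# Pearson's CC and KL divergence between a fixation map and a baseline map.
import numpy as np

def pearson_cc(map_a, map_b):
    """Linear correlation between two maps of the same shape."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])

def kl_divergence(map_p, map_q, eps=1e-12):
    """KL(P || Q): penalizes mass in P where Q predicts (almost) none."""
    p = map_p.ravel() / map_p.sum()
    q = map_q.ravel() / map_q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```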
Citations: 1