
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: latest publications

Sensitivity to natural 3D image transformations during eye movements
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204583
Maryam Keyvanara, R. Allison
The saccadic suppression effect, in which visual sensitivity is reduced significantly during saccades, has been suggested as a mechanism for masking graphic updates in a 3D virtual environment. In this study, we investigate whether the degree of saccadic suppression depends on the type of image change, particularly between different natural 3D scene transformations. The user observed 3D scenes and made a horizontal saccade in response to the displacement of a target object in the scene. During this saccade the entire scene translated or rotated. We studied six directions of transformation corresponding to the canonical directions for the six degrees of freedom. Following each trial, the user made a forced-choice indication of the direction of the scene change. Results show that during horizontal saccades, the most recognizable changes were rotations about the roll axis.
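The masking mechanism this abstract builds on is easy to illustrate: detect a saccade from gaze velocity and apply the scene transformation only while the eye is in flight, when suppression should hide the change. A minimal sketch, not the authors' experimental setup; the velocity threshold and the `apply_transform` callback are assumptions.

```python
import numpy as np

def saccade_gated_update(gaze_deg, dt, apply_transform, velocity_thresh=100.0):
    # gaze_deg: sequence of 2D gaze positions in visual degrees.
    # A saccade is flagged with a simple velocity threshold (deg/s); the
    # scene transformation (translate or rotate) is applied only while the
    # eye is in flight, when saccadic suppression should mask the change.
    v = np.linalg.norm(np.asarray(gaze_deg[-1]) - np.asarray(gaze_deg[-2])) / dt
    if v > velocity_thresh:
        apply_transform()  # e.g. rotate the whole 3D scene about one axis
        return True
    return False
```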
Citations: 4
Mobile consumer shopping journey in fashion retail: eye tracking mobile apps and websites
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208335
Zofija Tupikovskaja-Omovie, D. Tyler
Although fashion consumers have rapidly adopted smartphones, their dissatisfaction with retailers' mobile apps and websites keeps growing. This suggests that understanding how mobile consumers use smartphones for shopping is important for developing digital shopping platforms that fulfil consumers' expectations. Research to date has not focused on eye tracking consumers' shopping behaviour on smartphones. For this research, we employed mobile eye tracking experiments to develop a unique shopping journey for each fashion consumer, accounting for differences and similarities in their behaviour. Based on scan path visualizations and shopping journeys, we developed a precise account of the areas the majority of fashion consumers look at when browsing and inspecting product pages. Based on the findings, we identified mobile consumers' behaviour patterns and usability issues of the mobile channel, and established what features the mobile retail channel needs in order to satisfy fashion consumers' needs by offering pleasing user experiences.
Citations: 10
Revisiting data normalization for appearance-based gaze estimation
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204548
Xucong Zhang, Yusuke Sugano, A. Bulling
Appearance-based gaze estimation is promising for unconstrained real-world settings, but the large variability in head pose and user-camera distance poses significant challenges for training generic gaze estimators. Data normalization was proposed to cancel out this geometric variability by mapping input images and gaze labels to a normalized space. Although used successfully in prior works, the role and importance of data normalization remain unclear. To fill this gap, we study data normalization for the first time using principled evaluations on both simulated and real data. We propose a modification to the current data normalization formulation by removing the scaling factor and show that our new formulation performs significantly better (between 9.5% and 32.7%) in the different evaluation settings. Using images synthesized from a 3D face model, we demonstrate the benefit of data normalization for the efficiency of model training. Experiments on real-world images confirm the advantages of data normalization in terms of gaze estimation performance.
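For context, the normalization scheme being revisited warps the eye region into the view of a virtual camera that is rotated to face the eye; the diagonal scaling term `S` below is the factor the paper proposes to remove. A hedged sketch assuming the commonly published formulation; the focal length, normalized distance, and patch size are illustrative values, not the paper's settings.

```python
import numpy as np
import cv2

def normalize_eye_image(img, eye_center, camera_matrix,
                        focal_norm=960.0, distance_norm=600.0, roi=(60, 36)):
    # eye_center: 3D eye position in camera coordinates (e.g. millimetres).
    dist = np.linalg.norm(eye_center)
    z_axis = eye_center / dist                 # virtual camera looks at the eye
    x_axis = np.cross(np.array([0.0, 1.0, 0.0]), z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    R = np.stack([x_axis, y_axis, z_axis])     # rotation into normalized space

    cam_norm = np.array([[focal_norm, 0.0, roi[0] / 2],
                         [0.0, focal_norm, roi[1] / 2],
                         [0.0, 0.0, 1.0]])
    S = np.diag([1.0, 1.0, distance_norm / dist])  # the scaling term under study
    # Warp the input image by the homography C_n * S * R * C_r^(-1).
    W = cam_norm @ S @ R @ np.linalg.inv(camera_matrix)
    return cv2.warpPerspective(img, W, roi)
```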
Citations: 80
Error-aware gaze-based interfaces for robust mobile gaze interaction
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204536
Michael Barz, Florian Daiber, Daniel Sonntag, A. Bulling
Gaze estimation error can severely hamper usability and performance of mobile gaze-based interfaces, given that the error varies constantly across interaction positions. In this work, we explore error-aware gaze-based interfaces that estimate and adapt to gaze estimation error on the fly. We implement a sample error-aware user interface for gaze-based selection together with different error compensation methods: a naïve approach that increases component size in direct proportion to the absolute error, a recent model by Feit et al. that is based on the two-dimensional error distribution, and a novel predictive model that shifts gaze by a directional error estimate. We evaluate these models in a 12-participant user study and show that our predictive model significantly outperforms the others in terms of selection rate, particularly for small gaze targets. These results underline both the feasibility and potential of next-generation error-aware gaze-based user interfaces.
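The two compensation strategies named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the 1:1 growth factor and the shift convention are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    radius: float

def compensate_naive(target: Target, abs_error_px: float) -> Target:
    # Naive scheme: grow the selectable area in direct proportion to the
    # absolute gaze error (the 1:1 factor is an illustrative assumption).
    return Target(target.x, target.y, target.radius + abs_error_px)

def compensate_predictive(gaze_x, gaze_y, err_dx, err_dy):
    # Predictive scheme: shift the gaze point by a directional error
    # estimate before hit-testing against unmodified targets.
    return gaze_x - err_dx, gaze_y - err_dy

# Hit-test example: a fixation at (410, 300) with a measured +15 px
# horizontal bias still selects a target centred at (400, 300).
t = compensate_naive(Target(400, 300, 20), abs_error_px=10)
gx, gy = compensate_predictive(410, 300, err_dx=15, err_dy=0)
print(((gx - t.x) ** 2 + (gy - t.y) ** 2) ** 0.5 <= t.radius)
```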
Citations: 27
Fixation detection for head-mounted eye tracking based on visual similarity of gaze targets
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204538
Julian Steil, Michael Xuelin Huang, A. Bulling
Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits the fact that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential for analysing scene image information for eye movement detection.
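The core idea, that a fixation is a run of frames in which a small patch around the gaze point keeps looking the same, can be sketched as below. Patch size, the normalized cross-correlation threshold, and the minimum run length are assumptions rather than the paper's parameters; frames are assumed to be grayscale arrays.

```python
import numpy as np
import cv2

def detect_fixations(frames, gaze_px, patch=32, sim_thresh=0.8, min_len=3):
    # frames: list of grayscale uint8 images; gaze_px: per-frame (x, y) gaze.
    half = patch // 2
    similar = np.zeros(len(frames), dtype=bool)
    for i in range(1, len(frames)):
        (x0, y0), (x1, y1) = gaze_px[i - 1], gaze_px[i]
        a = frames[i - 1][int(y0) - half:int(y0) + half,
                          int(x0) - half:int(x0) + half]
        b = frames[i][int(y1) - half:int(y1) + half,
                      int(x1) - half:int(x1) + half]
        if a.shape == b.shape and a.size > 0:
            # Normalized cross-correlation of two equal-size patches.
            similar[i] = cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0] > sim_thresh
    # Group consecutive similar frames into fixation intervals.
    fixations, start = [], None
    for i, s in enumerate(similar):
        if s and start is None:
            start = i - 1
        elif not s and start is not None:
            if i - start >= min_len:
                fixations.append((start, i))
            start = None
    if start is not None and len(similar) - start >= min_len:
        fixations.append((start, len(similar)))
    return fixations  # list of (first_frame, end_frame) intervals
```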
Citations: 31
Hidden pursuits: evaluating gaze-selection via pursuits when the stimuli's trajectory is partially hidden
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204569
Thomas Mattusch, Mahsa Mirzamohammad, M. Khamis, A. Bulling, Florian Alt
The idea behind gaze interaction using Pursuits is to leverage the smooth pursuit eye movements humans perform when following moving targets. However, humans can also anticipate where a moving target will reappear if it temporarily hides from their view. In this work, we investigate how well users can select targets using Pursuits in cases where the target's trajectory is partially invisible (HiddenPursuits): e.g., can users select a moving target that temporarily hides behind another object? Although HiddenPursuits has not been studied in the context of interaction before, understanding how well users can perform HiddenPursuits presents numerous opportunities, particularly for small interfaces where a target's trajectory can cover area outside of the screen. We found that users can still select targets quickly via Pursuits even if their trajectory is up to 50% hidden, albeit with longer selection times when the hidden portion is larger. We discuss how gaze-based interfaces can leverage HiddenPursuits for an improved user experience.
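Pursuits-style selection rests on correlating the gaze trajectory with each target's trajectory; a sketch of that standard scheme follows, where a hidden segment is simply represented by the target's extrapolated (invisible) positions. The threshold and the per-axis combination rule are assumptions.

```python
import numpy as np

def pursuit_selection(gaze_xy, targets_xy, corr_thresh=0.8):
    # gaze_xy: (T, 2) gaze samples; targets_xy: {target_id: (T, 2) trajectory}.
    # For targets that are temporarily hidden, the trajectory contains their
    # extrapolated positions even while the target itself is invisible.
    best_id, best_corr = None, corr_thresh
    for tid, traj in targets_xy.items():
        cx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        cy = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        corr = min(cx, cy)  # require both axes to correlate
        if corr > best_corr:
            best_id, best_corr = tid, corr
    return best_id  # None if no target clears the threshold
```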
Citations: 4
Deep learning vs. manual annotation of eye movements
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208346
Mikhail Startsev, I. Agtzidis, M. Dorr
Deep learning models have already revolutionized many research fields. However, raw eye movement data is still typically processed into discrete events via threshold-based algorithms or manual labelling. In this work, we describe a compact 1D CNN model, which we combine with a BLSTM to achieve end-to-end sequence-to-sequence learning. We discuss the acquisition process for the ground truth we use, as well as the performance of our approach in comparison to various literature models and manual raters. Our deep method demonstrates superior performance, which brings us closer to human-level labelling quality.
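A compact model of the kind described, 1D convolutions over the raw gaze signal feeding a bidirectional LSTM that labels every sample, might look like the following PyTorch sketch. Layer sizes, kernel widths, and the three output classes are assumptions, not the authors' architecture.

```python
import torch.nn as nn

class CNNBLSTM(nn.Module):
    # Sequence-to-sequence labeller: one class score per gaze sample,
    # e.g. fixation / saccade / pursuit (classes are an assumption).
    def __init__(self, in_features=2, hidden=64, classes=3):
        super().__init__()
        self.conv = nn.Sequential(                 # temporal feature extractor
            nn.Conv1d(in_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.blstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)  # per-sample class scores

    def forward(self, x):            # x: (batch, time, features) gaze samples
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.blstm(h)
        return self.head(h)          # (batch, time, classes)
```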
Citations: 0
Robustness of metrics used for scanpath comparison
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204580
F. Děchtěrenko, J. Lukavský
In every quantitative eye tracking research study, researchers need to compare eye movements between subjects or conditions. For both static and dynamic tasks, there is a variety of metrics that could serve this purpose. It is important to explore the robustness of the metrics with respect to artificial noise. For dynamic tasks, where eye movement data is represented as scanpaths, there are currently no studies regarding the robustness of the metrics. In this study, we explored properties of five metrics (Levenshtein distance, correlation distance, Fréchet distance, mean and median distance) used for comparison of scanpaths. We systematically added noise by applying three transformations to the scanpaths: translation, rotation, and scaling. For each metric, we computed baseline similarity for two random scanpaths and explored the metrics' sensitivity. Our results allow other researchers to convert results between studies.
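One of the simpler metrics studied (mean pointwise distance) and the three noise transformations can be written down directly; a sketch with illustrative magnitudes, not the study's conditions.

```python
import numpy as np

def mean_distance(a, b):
    # Mean pointwise Euclidean distance between two equal-length scanpaths.
    return np.mean(np.linalg.norm(a - b, axis=1))

def perturb(path, dx=0.0, dy=0.0, angle=0.0, scale=1.0):
    # Translation, rotation about the centroid, and scaling: the three
    # noise transformations applied in the study (magnitudes illustrative).
    c = path.mean(axis=0)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return (path - c) @ rot.T * scale + c + np.array([dx, dy])

# Example: how sensitive is the mean-distance metric to a 5-degree rotation?
rng = np.random.default_rng(0)
path = rng.uniform(0, 1024, size=(50, 2))   # a random 50-fixation scanpath
print(mean_distance(path, perturb(path, angle=np.deg2rad(5.0))))
```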
Citations: 1
The eye of the typer: a benchmark and analysis of gaze behavior during typing
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204552
Alexandra Papoutsaki, Aaron Gokaslan, J. Tompkin, Yuze He, Jeff Huang
We examine the relationship between eye gaze and typing, focusing on the differences between touch and non-touch typists. To enable typing-based research, we created a 51-participant benchmark dataset for user input across multiple tasks, including user input data, screen recordings, webcam video of the participant's face, and eye tracking positions. There are patterns of eye movements that differ between the two types of typists, representing glances at the keyboard, which can be used to identify touch-typed strokes with 92% accuracy. Then, we relate eye gaze with cursor activity, aligning both pointing and typing to eye gaze. One demonstrative application of the work is in extending WebGazer, a real-time web-browser-based webcam eye tracker. We show that incorporating typing behavior as a secondary signal improves eye tracking accuracy by 16% for touch typists, and 8% for non-touch typists.
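A minimal version of the glance-based classification the abstract implies: a keystroke counts as touch-typed if gaze never lands on the keyboard region around the press. The `label_touch_typed` helper, the screen/keyboard boundary, and the time window are hypothetical, not the benchmark's method.

```python
import numpy as np

def label_touch_typed(keystroke_times, gaze_times, gaze_y,
                      keyboard_y_min=800.0, window_s=0.15):
    # A keystroke counts as touch-typed when no gaze sample falls on the
    # keyboard region within a short window around the key press.
    labels = []
    for t in keystroke_times:
        near = np.abs(np.asarray(gaze_times) - t) < window_s
        on_keyboard = np.any(np.asarray(gaze_y)[near] >= keyboard_y_min)
        labels.append(not on_keyboard)
    return labels
```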
Citations: 21
Automatic detection and inhibition of neutral and emotional stimuli in post-traumatic stress disorder: an eye-tracking study: eye-tracking data of an original antisaccade task
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207419
Wivine Blekić, M. Rossignol
This research project addresses the understanding of attentional biases in post-traumatic stress disorder (PTSD). This psychiatric condition is mainly characterized by symptoms of intrusion (flashbacks), avoidance, alterations in arousal and reactivity (hypervigilance), and negative mood and cognitions persisting one month after exposure to a traumatic event [American Psychiatric Association 2013]. Clinical observations as well as empirical research have highlighted hypervigilance as central to PTSD symptomatology, considering that other clinical features could be maintained by it [Ehlers and Clark 2000]. Attentional control theory has described the hypervigilance in anxious disorders as the co-occurrence of two cognitive processes: an enhanced detection of threatening information followed by difficulties in inhibiting its processing [Eysenck et al. 2007]. Nevertheless, attentional control theory has never been applied to PTSD. This project aims at providing cognitive evidence of hypervigilance symptoms in PTSD using eye tracking during the realization of reliable Miyake tasks [Eysenck and Derakshan 2011]. Therefore, our first aim is to model the co-occurring processes of hypervigilance using eye-tracking technology. Indeed, behavioral measures (such as reaction time) do not allow a clear representation of cognitive processes occurring subconsciously within a few milliseconds [Felmingham 2016]. Therefore, eye-tracking technology is essential in our studies. Secondly, we aim to analyze the differential impact of trauma-related versus negative stimuli on PTSD patients by examining scan paths following the presentation of both types of stimuli. This research project is divided into four studies; the first is described in this doctoral symposium.
Citations: 0