
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

EyeMSA
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204565
Michael Burch, K. Kurzhals, Niklas Kleinhans, D. Weiskopf
Eye movement data can be regarded as a set of scan paths, each corresponding to one of the visual scanning strategies of a certain study participant. Finding common subsequences in those scan paths is a challenging task since they are typically not equally temporally long, do not consist of the same number of fixations, or do not lead along similar stimulus regions. In this paper we describe a technique based on pairwise and multiple sequence alignment to support a data analyst to see the most important patterns in the data. To reach this goal the scan paths are first transformed into a sequence of characters based on metrics as well as spatial and temporal aggregations. The result of the algorithmic data transformation is used as input for an interactive consensus matrix visualization. We illustrate the usefulness of the concepts by applying it to formerly recorded eye movement data investigating route finding tasks in public transport maps.
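The paper's exact metrics and spatial/temporal aggregations are not reproduced here; as a minimal sketch, assuming fixations have already been encoded as AOI characters, the pairwise alignment of two scan-path strings can be computed with the classic Needleman-Wunsch dynamic program:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global pairwise alignment score for two scan-path strings
    (fixations encoded as AOI characters)."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = score[i - 1][0] + gap
    for j in range(1, m + 1):
        score[0][j] = score[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Two hypothetical scan paths over AOIs A, B, C:
print(needleman_wunsch("ABCAB", "ABAB"))  # 4 matches and 1 gap -> 3
```

Multiple sequence alignment, as used in the paper, generalizes this pairwise scoring to a set of scan paths from which a consensus sequence can be derived.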
Citations: 13
EyeMR
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208336
Tim Claudius Stratmann, Uwe Gruenefeld, Susanne C.J. Boll
Mixed Reality devices can either augment reality (AR) or create completely virtual realities (VR). Combined with head-mounted devices and eye-tracking, they enable users to interact with these systems in novel ways. However, current eye-tracking systems are expensive and limited in their interaction with virtual content. In this paper, we present EyeMR, a low-cost system (below $100) that enables researchers to rapidly prototype new techniques for eye and gaze interactions. Our system supports mono- and binocular tracking (using Pupil Capture) and includes a Unity framework to support the fast development of new interaction techniques. We argue for the usefulness of EyeMR based on the results of a user evaluation with HCI experts.
Citations: 3
An eye gaze model for seismic interpretation support
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204554
Vagner Figuerêdo de Santana, J. Ferreira, R. Paula, Renato Cerqueira
Designing systems to offer support to experts during cognitive intensive tasks at the right time is still a challenging endeavor, despite years of research progress in the area. This paper proposes a gaze model based on eye tracking empirical data to identify when a system should proactively interact with the expert during visual inspection tasks. The gaze model derives from the analyses of a user study where 11 seismic interpreters were asked to perform the visual inspection task of seismic images from known and unknown basins. The eye tracking fixation patterns were triangulated with pupil dilations and thinking-aloud data. Results show that cumulative saccadic distances allow identifying when additional information could be offered to support seismic interpreters, changing the visual search behavior from exploratory to goal-directed.
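The paper does not give a formula for the cumulative saccadic distance it uses as a trigger; a minimal sketch of the quantity, computed over a list of fixation centers, might look like this:

```python
import math

def cumulative_saccadic_distance(fixations):
    """Sum of Euclidean distances between consecutive fixation centers.
    `fixations` is a list of (x, y) screen coordinates."""
    return sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))

# Hypothetical fixation path: a high cumulative distance over a window
# would suggest exploratory rather than goal-directed search.
path = [(0, 0), (3, 4), (3, 4), (6, 8)]
print(cumulative_saccadic_distance(path))  # 5.0 + 0.0 + 5.0 = 10.0
```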
Citations: 5
Evaluating similarity measures for gaze patterns in the context of representational competence in physics education
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204564
Saleh Mozaffari, P. Klein, J. Viiri, Sheraz Ahmed, J. Kuhn, A. Dengel
The competent handling of representations is required for understanding physics concepts, developing problem-solving skills, and achieving scientific expertise. Using eye-tracking methodology, this paper makes two contributions. First, we investigated the representational preferences of students with different levels of knowledge (experts, intermediates, and novices) in the domain of physics problem-solving. The results reveal that experts are more likely to prefer vector representations over others, that table representations are used with a similar tendency across all groups, and that diagram representations are used less than the others. Second, we evaluated three similarity measures: Levenshtein distance, transition entropy, and Jensen-Shannon divergence. Recursive Feature Elimination suggests that Jensen-Shannon divergence is the best discriminating feature among the three. However, an investigation of the mutual dependency of the features implies that transition entropy links the other two: it shares mutual information with Levenshtein distance (Maximal Information Coefficient = 0.44) and correlates with Jensen-Shannon divergence (r(18313) = 0.70, p < .001).
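The three measures can be sketched for AOI-encoded gaze sequences as follows (a minimal illustration; the paper's exact preprocessing is not reproduced here):

```python
import math
from collections import Counter

def levenshtein(a, b):
    """Edit distance between two AOI-encoded gaze sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def transition_entropy(seq):
    """Shannon entropy (bits) of first-order AOI transitions."""
    pairs = Counter(zip(seq, seq[1:]))
    total = sum(pairs.values())
    return -sum((c / total) * math.log2(c / total) for c in pairs.values())

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between two AOI visit distributions."""
    def kl(x, y):
        return sum(xi * math.log2(xi / yi) for xi, yi in zip(x, y) if xi > 0)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(levenshtein("ABCA", "ABA"))              # one deletion -> 1
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))  # disjoint distributions -> 1.0
```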
Citations: 3
Development of diagnostic performance & visual processing in different types of radiological expertise
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204562
P. Kasprowski, Katarzyna Harężlak, S. Kasprowska
The aim of this research was to compare visual patterns while examining radiographs across groups of people with different levels and different types of expertise. Introducing the latter comparative base is the original contribution of these studies. The residents and specialists were trained in the medical diagnosis of X-rays, so for these two groups it was possible to compare visual patterns between observers with different levels of the same expertise type. The radiographers who took part in the examination, on the other hand, had experience in reading and evaluating X-ray quality through their daily work but were not trained in diagnosis. Involving this group created a new opportunity in our research to explore the eye movements obtained when examining X-rays for both medical diagnosis and quality assessment, which may be treated as different types of expertise. We found that, despite their low diagnostic performance, the radiographers' eye movement characteristics were more similar to the specialists' than to the residents'. It may be inferred that people with different types of expertise, after gaining a certain level of experience (or practice), may develop similar visual patterns; this is the original conclusion of the research.
Citations: 7
A visual comparison of gaze behavior from pedestrians and cyclists
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3214307
Mathias Trefzger, Tanja Blascheck, Michael Raschke, Sarah Hausmann, T. Schlegel
In this paper, we contribute an eye tracking study conducted with pedestrians and cyclists. We apply a visual analytics-based method to inspect pedestrians' and cyclists' gaze behavior as well as video recordings and accelerometer data. This method using multi-modal data allows us to explore patterns and extract common eye movement strategies. Our results are that participants paid most attention to the path itself; advertisements do not distract participants; participants focus more on pedestrians than on cyclists; pedestrians perform more shoulder checks than cyclists do; and we extracted common gaze sequences. Such an experiment in a real-world traffic environment allows us to understand realistic behavior of pedestrians and cyclists better.
Citations: 2
Asynchronous gaze sharing: towards a dynamic help system to support learners during program comprehension
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207421
Fabian Deitelhoff
In a rapidly changing world, learning the fundamentals of programming is important for participating in society. However, learning to program is challenging for many novices, and reading source code is one major obstacle in this challenge. The primary research objective of my dissertation is developing a help system based on historical and interactive eye tracking data to help novices master program comprehension. Helping novices requires detecting problematic situations while they solve programming tasks, using a classifier to split novices into successful and unsuccessful participants based on the answers given to program comprehension tasks. One set of features for this classifier is the story reading and execution reading order. The first step in my dissertation is creating a classifier for the reading-order problem. The current status of this step is the analysis of eye tracking datasets of novices and experts.
Citations: 0
Suitability of calibration polynomials for eye-tracking data with simulated fixation inaccuracies
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204586
William Rosengren, M. Nyström, B. Hammar, M. Stridh
Current video-based eye trackers are not suited for calibration of patients who cannot produce stable and accurate fixations. Reliable calibration is crucial in order to make repeatable recordings, which in turn are important to accurately measure the effects of a medical intervention. To test the suitability of different calibration polynomials for such patients, inaccurate calibration data were simulated using a geometric model of the EyeLink 1000 Plus desktop mode setup. This model is used to map eye position features to screen coordinates, creating screen data with known eye tracker data. This allows for objective evaluation of gaze estimation performance over the entire computer screen. Results show that the choice of calibration polynomial is crucial in order to ensure a high repeatability across measurements from patients who are hard to calibrate. Higher order calibration polynomials resulted in poor gaze estimation even for small simulated fixation inaccuracies.
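The paper's geometric model of the EyeLink setup is not reproduced here; the core idea of polynomial calibration under noisy fixations can be sketched as a least-squares fit. The ground-truth mapping and noise level below are illustrative assumptions:

```python
import numpy as np

def design_matrix(px, py, order):
    """Polynomial terms px**i * py**j with i + j <= order."""
    cols = [px**i * py**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
# Simulated pupil-position features at 50 calibration targets
px, py = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
# Hypothetical ground-truth mapping from pupil features to screen x (pixels)
sx = 960 + 800 * px + 40 * px * py
# Simulated fixation inaccuracy: noise on the recorded calibration data
sx_noisy = sx + rng.normal(0, 5, 50)

# Fit a second-order calibration polynomial by least squares
X = design_matrix(px, py, 2)
coef, *_ = np.linalg.lstsq(X, sx_noisy, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - sx) ** 2)))
print(rmse)  # residual error against the true (noise-free) mapping
```

With a higher `order` and the same noisy targets, the fit chases the fixation noise, which mirrors the paper's finding that higher-order polynomials degrade gaze estimation for hard-to-calibrate participants.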
Citations: 3
Training operational monitoring in future ATCOs using eye tracking: extended abstract
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207412
Carolina Barzantny
Improved technological possibilities continue to increase the significance of operational monitoring in air traffic control (ATC). The role of the air traffic controller (ATCO) will change in that they will have to monitor the operations of an automated system for failures. In order to take over control when automation fails, future ATCOs will need to be trained. While current ATC training is mainly based on performance indicators, this study will focus on the benefit of using eye tracking in future ATC training. Utilizing a low-fidelity operational monitoring task, a model of how attention should be allocated in case of malfunction will be derived. Based on this model, one group of ATC novices will receive training on how to allocate their attention appropriately (treatment). The other group will receive no training (control). Eye movements will be recorded to investigate how attention is allocated and if the training is successful. Performance measures will be used to evaluate the effectiveness of the training.
Citations: 5
How many words is a picture worth?: attention allocation on thumbnails versus title text regions
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204571
Chaitra Yangandul, S. Paryani, Madison Le, Eakta Jain
Cognitive scientists and psychologists have long noted the "picture superiority effect", that is, pictorial content is more likely to be remembered and more likely to lead to an increased understanding of the material. We investigated the relative importance of pictorial regions versus textual regions on a website where pictures and text co-occur in a very structured manner: video content sharing websites. We tracked participants' eye movements as they performed a casual browsing task, that is, selecting a video to watch. We found that participants allocated almost twice as much attention to thumbnails as to title text regions. They also tended to look at the thumbnail images before the title text, as predicted by the picture superiority effect. These results have implications for both user experience designers as well as video content creators.
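Attention allocation of this kind is typically quantified as total dwell time per area of interest (AOI); a minimal sketch with hypothetical fixation data (the study's actual AOI definitions and durations are not given here):

```python
from collections import defaultdict

def dwell_time_by_aoi(fixations):
    """Total fixation duration per area of interest.
    `fixations` is a list of (aoi_label, duration_ms) tuples."""
    totals = defaultdict(int)
    for aoi, duration in fixations:
        totals[aoi] += duration
    return dict(totals)

# Hypothetical fixations from one participant browsing a video list
fixes = [("thumbnail", 400), ("title", 250), ("thumbnail", 380), ("title", 150)]
d = dwell_time_by_aoi(fixes)
print(d["thumbnail"] / d["title"])  # attention ratio, thumbnail vs. title
```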
Citations: 9