
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Robust eye contact detection in natural multi-person interactions using gaze and speaking behaviour
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204549
P. Müller, Michael Xuelin Huang, Xucong Zhang, A. Bulling
Eye contact is one of the most important non-verbal social cues and fundamental to human interactions. However, detecting eye contact without specialised eye tracking equipment poses significant challenges, particularly for multiple people in real-world settings. We present a novel method to robustly detect eye contact in natural three- and four-person interactions using off-the-shelf ambient cameras. Our method exploits that, during conversations, people tend to look at the person who is currently speaking. Harnessing the correlation between people's gaze and speaking behaviour therefore allows our method to automatically acquire training data during deployment and adaptively train eye contact detectors for each target user. We empirically evaluate the performance of our method on a recent dataset of natural group interactions and demonstrate that it achieves a relative improvement over the state-of-the-art method of more than 60%, and also improves over a head pose based baseline.
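The key mechanism is the automatic acquisition of training labels from speaking activity. Below is a minimal sketch of this weak-labelling idea, assuming per-frame gaze/appearance features and per-frame speaker annotations are already available; the feature extraction, the SVC classifier choice, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_eye_contact_detector(features, speaker_ids, target_id):
    """Train a per-user detector for eye contact with target_id.

    features:    (N, D) per-frame gaze/appearance features of one observer
    speaker_ids: (N,) id of the person speaking in each frame (-1 = nobody)
    target_id:   the person for whom eye contact should be detected
    """
    # Weak labels: frames in which the target speaks are treated as positives
    # (people tend to look at the current speaker); frames in which someone
    # else speaks are treated as negatives.
    positives = features[speaker_ids == target_id]
    negatives = features[(speaker_ids != target_id) & (speaker_ids >= 0)]
    X = np.vstack([positives, negatives])
    y = np.concatenate([np.ones(len(positives)), np.zeros(len(negatives))])
    return SVC(probability=True).fit(X, y)

# Usage with synthetic data standing in for real recordings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 16))
speakers = rng.integers(-1, 3, size=500)       # 3 possible speakers or silence
detector = train_eye_contact_detector(feats, speakers, target_id=2)
print(detector.predict_proba(feats[:5]))       # per-frame eye contact probability
```

Because the labels come from speaking behaviour observed during deployment, the detector can be retrained per target user without any manual annotation, which is the adaptivity the abstract describes.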
Citations: 39
Implicit user calibration for gaze-tracking systems using an averaged saliency map around the optical axis of the eye
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204572
Mamoru Hiroe, Michiya Yamamoto, Takashi Nagamatsu
A 3D gaze-tracking method that uses two cameras and two light sources can measure the optical axis of the eye without user calibration. The visual axis of the eye (line of sight) is estimated by conducting a single-point user calibration. This single-point user calibration estimates the angle k that is offset between the optical and visual axes of the eye, which is a user-dependent parameter. We have proposed an implicit user calibration method for gaze-tracking systems using a saliency map around the optical axis of the eye. We assume that the peak of the average of the saliency maps indicates the visual axis of the eye in the eye coordinate system. We used both-eye restrictions effectively. The experimental result shows that the proposed system could estimate angle k without explicit personal calibration.
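A minimal sketch of the peak-finding step follows, assuming the saliency maps have already been re-projected into an angular grid centred on the optical axis of the eye; the grid resolution and function names are hypothetical, and the paper's both-eye restrictions are omitted.

```python
import numpy as np

def estimate_offset_angle(saliency_maps, deg_per_pixel=0.1):
    """saliency_maps: (N, H, W) maps re-projected around the optical axis."""
    averaged = np.mean(saliency_maps, axis=0)
    peak_y, peak_x = np.unravel_index(np.argmax(averaged), averaged.shape)
    # The peak's offset from the map centre (= optical axis) approximates the
    # user-dependent angle k between the optical and visual axes.
    centre_y, centre_x = (averaged.shape[0] - 1) / 2, (averaged.shape[1] - 1) / 2
    return (peak_x - centre_x) * deg_per_pixel, (peak_y - centre_y) * deg_per_pixel

# Synthetic example: saliency consistently peaks about 5 degrees to the right.
maps = np.zeros((20, 101, 101))
maps[:, 50, 100] = 1.0
print(estimate_offset_angle(maps))  # approximately (5.0, 0.0)
```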
Citations: 4
New features of scangraph: a tool for revealing participants' strategy from eye-movement data
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208334
S. Popelka, J. Dolezalová, Marketa Beitlova
The demo describes new features of ScanGraph, an application intended for finding participants with a similar stimulus reading strategy based on their sequences of visited Areas of Interest. The result is visualised using cliques of a simple graph. ScanGraph was initially introduced in 2016. Since the original publication, new features have been added. The first is the implementation of the Damerau-Levenshtein algorithm for similarity calculation. The heuristic clique-finding algorithm used in the original version was replaced by the Bron-Kerbosch algorithm. ScanGraph reads data from the open-source application OGAMA and, with the use of a conversion tool, also data from SMI BeGaze, which allows dynamic stimuli to be analysed as well. The most prominent enhancement is the possibility of calculating similarity among participants not only for a single stimulus but for multiple files at once.
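A small sketch of the described pipeline is given below; it is not ScanGraph's own code. It compares AOI sequences with a restricted Damerau-Levenshtein (optimal string alignment) distance, links participants whose similarity exceeds a chosen threshold, and extracts cliques with networkx.find_cliques, which implements the Bron-Kerbosch algorithm. The sequences and the 0.6 threshold are made up for illustration.

```python
import networkx as nx

def osa_distance(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def similarity(a, b):
    return 1.0 - osa_distance(a, b) / max(len(a), len(b), 1)

# Hypothetical AOI sequences (one letter per visited Area of Interest).
sequences = {"P1": "ABCCD", "P2": "ABCD", "P3": "DDCBA", "P4": "ABDCD"}
G = nx.Graph()
G.add_nodes_from(sequences)
for p in sequences:
    for q in sequences:
        if p < q and similarity(sequences[p], sequences[q]) >= 0.6:
            G.add_edge(p, q)

# find_cliques implements the Bron-Kerbosch algorithm.
print(list(nx.find_cliques(G)))  # groups of participants with similar strategies
```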
Citations: 6
Capturing real-world gaze behaviour: live and unplugged
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204528
Karishma Singh, Mahmoud Kalash, Neil D. B. Bruce
Understanding human gaze behaviour has benefits ranging from scientific understanding to many application domains. Current practices constrain possible use cases, requiring experimentation restricted to a lab setting or controlled environment. In this paper, we demonstrate a flexible, unconstrained end-to-end solution that allows for the collection and analysis of gaze data in real-world settings. To achieve these objectives, rich 3D models of the real world are derived along with strategies for associating experimental eye-tracking data with these models. In particular, we demonstrate the strength of photogrammetry in allowing these capabilities to be realized, and demonstrate the first complete solution for 3D gaze analysis in large-scale outdoor environments using standard camera technology without fiducial markers. The paper also presents techniques for quantitative analysis and visualization of 3D gaze data. As a whole, the body of techniques presented provides a foundation for future research, with new opportunities for experimental studies and computational modeling efforts.
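One central step is associating gaze rays with the photogrammetric reconstruction. The sketch below illustrates that mapping under strong simplifying assumptions: the scene is already available as a triangle mesh and the tracker's head pose is registered in the same world frame. The use of the trimesh library and the toy box scene are illustrative choices, not the paper's pipeline.

```python
import numpy as np
import trimesh

def map_gaze_to_mesh(mesh, eye_position_world, gaze_direction_world):
    """Intersect one 3D gaze ray with the reconstructed scene mesh."""
    locations, index_ray, index_tri = mesh.ray.intersects_location(
        ray_origins=[eye_position_world],
        ray_directions=[gaze_direction_world],
    )
    if len(locations) == 0:
        return None  # the gaze ray leaves the reconstructed scene
    # Several triangles can be hit; keep the intersection closest to the eye.
    distances = np.linalg.norm(locations - eye_position_world, axis=1)
    return locations[np.argmin(distances)]

# Toy scene: a 2 x 2 x 2 box standing in for a photogrammetric reconstruction.
scene = trimesh.creation.box(extents=(2.0, 2.0, 2.0))
hit = map_gaze_to_mesh(scene, np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0]))
print(hit)  # expected near (0, 0, 1): the box face towards the viewer
```

Aggregating such intersection points over time yields 3D fixation locations that can then be analysed and visualised on the reconstructed model.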
Citations: 7
Evaluating gender difference on algorithmic problems using eye-tracker
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204537
U. Obaidellah, Mohammed Al Haek
Gender differences in programming comprehension have been a topic of discussion in recent years. We conducted an eye-tracking study on 51 (21 female, 30 male) computer science undergraduate university students to examine their cognitive processes in pseudocode comprehension. We aim to identify their reading strategies and eye gaze behavior in the comprehension of pseudocode, in terms of performance and visual effort, when solving algorithmic problems of varying difficulty levels. Each student completed a series of tasks requiring them to rearrange randomized pseudocode statements into the correct order for the problem presented. Our results indicated that male students analyzed the problems faster, although female students fixated longer when understanding the problem requirements. In addition, female students more commonly fixated on indicative verbs (i.e., prompt, print), while male students fixated more on operational statements (i.e., loops, variable calculations, file handling).
Citations: 8
Eyemic
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208342
Shaharam Eivazi, Maximilian Maurer
The concept of a hands-free surgical microscope has become increasingly popular in the domain of microsurgery. The high magnification and small field of view necessitate frequent interaction with the microscope during an operation. Researchers have shown that manual (hand) interactions with a surgical microscope result in disruptive and hazardous situations. Previously, we proposed the idea of an eye-controlled microscope as a solution to this interaction problem. While gaze-contingent applications have been widely studied in the HCI and eye tracking domains, the lack of ocular-based eye trackers for microscopes remains an important concern. To solve this critical problem and provide the opportunity to capture eye movements in microsurgery in real time, we present EyeMic, a binocular eye tracker that can be attached on top of any microscope ocular. Our eye tracker is only 5 mm high to guarantee the same field of view, and it supports eye movement recording at up to 120 frames per second.
{"title":"Eyemic","authors":"Shaharam Eivazi, Maximilian Maurer","doi":"10.1145/3204493.3208342","DOIUrl":"https://doi.org/10.1145/3204493.3208342","url":null,"abstract":"The concept of hands free surgical microscope has become increasingly popular in the domain of microsurgery. The higher magnification, the smaller field of view, necessitates frequent interaction with the microscope during an operation. Researchers showed that manual (hand) interactions with a surgical microscope resulted in disruptive and hazardous situations. Previously, we proposed the idea of eye control microscope as a solution to this interaction problem. While gaze contingent applications have been widely studied in HCI and eye tracking domain the lack of ocular based eye trackers for microscope being an important concern in this domain. To solve this critical problem and provide opportunity to capture eye movements in microsurgery in real time we present EyeMic, a binocular eye tracker that can be attached on top of any microscope ocular. Our eye tracker has only 5mm height to grantee same field of view, and it supports up to 120 frame per second eye movement recording.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"78 26","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113944410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Developing photo-sensor oculography (PS-OG) system for virtual reality headsets
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208341
R. Zemblys, Oleg V. Komogortsev
Virtual reality (VR) is employed in a variety of different applications. It is our belief that eye-tracking is going to be a part of the majority of VR devices that will reduce computational burden via a technique called foveated rendering and will increase the immersion of the VR environment. A promising technique to achieve low energy, fast, and accurate eye-tracking is photo-sensor oculography (PS-OG). PS-OG technology enables tracking a user's gaze location at very fast rates - 1000Hz or more, and is expected to consume several orders of magnitude less power compared to a traditional video-oculography approach. In this demo we present a prototype of a PS-OG system that we started to develop. The long-term aim of our project is to develop a PS-OG system that is robust to sensor shifts. As a first step we have built a prototype that allows us to test different sensors and their configurations, as well as record and analyze eye-movement data.
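The abstract does not spell out the estimation step, but PS-OG systems typically map a handful of photosensor intensities to gaze coordinates with a lightweight regression fitted during a short calibration. The sketch below shows that generic formulation on synthetic data; the sensor count, the polynomial ridge regression, and all names are assumptions, not the authors' prototype.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical calibration data: 8 photosensor intensities per sample, paired
# with the on-screen gaze target (x, y) shown while the sample was recorded.
rng = np.random.default_rng(42)
sensor_readings = rng.uniform(0.0, 1.0, size=(100, 8))
true_mapping = rng.normal(size=(8, 2))
gaze_targets = sensor_readings @ true_mapping + rng.normal(scale=0.01, size=(100, 2))

# Fit a lightweight mapping once during calibration ...
model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(sensor_readings, gaze_targets)

# ... then every new sensor frame becomes a gaze estimate with a handful of
# multiply-adds, which is what makes kHz-rate, low-power tracking plausible.
new_frame = rng.uniform(0.0, 1.0, size=(1, 8))
print(model.predict(new_frame))  # estimated (x, y) gaze coordinates
```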
Citations: 6
Enhanced representation of web pages for usability analysis with eye tracking
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204535
Raphael Menges, Hanadi Tamimi, C. Kumar, T. Walber, Christoph Schaefer, Steffen Staab
Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer either from accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to fixed elements for an enhanced representation of the page. We conducted an experiment with 10 participants and the results signify that analysis with our method is more efficient than a video recording, which is an essential criterion for large scale Web studies.
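A simplified sketch of the mapping idea follows: gaze recorded in viewport coordinates is shifted by the scroll offset into page coordinates, except when it falls on an element fixed to the viewport, which keeps its viewport position regardless of scrolling. The FixedElement class, the element geometry, and the example numbers are hypothetical; the paper's method additionally detects the fixed elements and stitches viewport screenshots into the enhanced page representation.

```python
from dataclasses import dataclass

@dataclass
class FixedElement:
    x: float       # position and size of a viewport-fixed element (e.g. a sticky menu)
    y: float
    width: float
    height: float

    def contains(self, vx, vy):
        return self.x <= vx <= self.x + self.width and self.y <= vy <= self.y + self.height

def to_page_coords(vx, vy, scroll_x, scroll_y, fixed_elements):
    """Map one viewport gaze sample into the enhanced page representation."""
    for element in fixed_elements:
        if element.contains(vx, vy):
            # Gaze on a fixed element: it never scrolls, so viewport coords apply.
            return ("fixed", vx, vy)
    # Gaze on scrolling content: shift by the current scroll offset.
    return ("page", vx + scroll_x, vy + scroll_y)

# Example: a sticky header occupying the top 80 px of a 1280 px wide viewport.
header = FixedElement(x=0, y=0, width=1280, height=80)
print(to_page_coords(300, 40, 0, 1500, [header]))   # ('fixed', 300, 40)
print(to_page_coords(300, 400, 0, 1500, [header]))  # ('page', 300, 1900)
```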
Citations: 8
I see what you see: gaze awareness in mobile video collaboration
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204542
Deepak Akkil, Biju Thankachan, Poika Isokoski
An emerging use of mobile video telephony is to enable joint activities and collaboration on physical tasks. We conducted a controlled user study to understand whether seeing the gaze of a remote instructor is beneficial for mobile video collaboration and whether it is valuable for the instructor to be aware that the gaze is being shared. We compared three gaze sharing configurations against a shared-mouse-pointer baseline: (a) Gaze_Visible, where the instructor is aware of the sharing and can view her own gaze point that is being shared, (b) Gaze_Invisible, where the instructor is aware of the shared gaze but cannot view her own gaze point, and (c) Gaze_Unaware, where the instructor is unaware of the gaze sharing. Our results suggest that naturally occurring gaze may not be as useful as explicitly produced eye movements. Further, instructors prefer using the mouse rather than gaze for remote gesturing, while the workers also find value in the transferred gaze information.
Citations: 13
Towards concise gaze sharing
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207416
C. Schlösser
Computer-supported collaboration has changed the way we learn and work together, as co-location is no longer a necessity. While presence, pointing and actions belong to the established inventory of awareness functionality, which aims to inform about peer activities, visual attention as a beneficial cue for successful collaboration does not. Several studies have shown that providing real-time gaze cues is advantageous, as it enables more efficient referencing by reducing deictic expressions and fosters joint attention by facilitating shared gaze. But actual use is held back by inherent limitations: real-time gaze display is often considered distracting, owing to its constant movement and an overall low signal-to-noise ratio. As a result, its transient nature makes it difficult to associate with a dynamic stimulus over time. While it is helpful when referencing or shared gaze is crucial, application in common collaborative environments, with constant alternation between close and loose collaboration, presents challenges. My dissertation work will explore a novel gaze sharing approach that aims to detect task-related gaze patterns, which are displayed in concise representations. This work will contribute to our understanding of coordination in collaborative environments and propose algorithms and design recommendations for gaze sharing.
Citations: 6