
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Training facilitates cognitive control on pupil dilation
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204570
Jan Ehlers, Christoph Strauch, A. Huckauf
Physiological responses are generally involuntary; however, real-time feedback enables, at least to a certain extent, voluntary control of automatic processes. Recently, it was demonstrated that even pupil dilation is subject to controlled interference. To address the effects of training on the ability to exercise control over pupil dilation, the current study examines repeated exercise over seven successive days. Participants utilize self-induced changes in arousal to increase pupil diameter; real-time feedback was applied to evaluate and improve individual performance. We observe inter-individual differences with regard to the responsiveness of the pupillary response: six of eight participants considerably increased pupil diameter already during the first session, two exhibited only slight changes, and all showed rather stable performance throughout training. There was a trend towards stronger peak amplitudes, which tended to occur increasingly early over time. Hence, higher cognitive control of pupil dilation can be practiced by most users and may therefore provide an appropriate input mechanism for human-computer interaction.
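The training relies on showing the participant's current pupil diameter relative to a resting baseline in real time. A minimal sketch of such a feedback loop (not the authors' implementation) follows; `read_pupil_diameter` is a hypothetical sampling callback that an eye tracker SDK would supply, and the text bar stands in for whatever feedback display is used:

```python
import time

def measure_baseline(read_pupil_diameter, seconds=10.0, rate_hz=60.0):
    """Average pupil diameter over a resting period to obtain a baseline."""
    samples = []
    for _ in range(int(seconds * rate_hz)):
        samples.append(read_pupil_diameter())
        time.sleep(1.0 / rate_hz)
    return sum(samples) / len(samples)

def feedback_trial(read_pupil_diameter, baseline, seconds=20.0, rate_hz=60.0):
    """Continuously display relative pupil dilation so the participant can
    learn to increase it through self-induced changes in arousal."""
    for _ in range(int(seconds * rate_hz)):
        relative = (read_pupil_diameter() - baseline) / baseline
        bar = "#" * max(0, int(relative * 100))   # crude text-based feedback
        print(f"\r{relative:+7.2%} {bar:<40}", end="", flush=True)
        time.sleep(1.0 / rate_hz)
    print()
```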
Citations: 5
A novel approach to single camera, glint-free 3D eye model fitting including corneal refraction
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204525
K. Dierkes, Moritz Kassner, A. Bulling
Model-based methods for glint-free gaze estimation typically infer eye pose using pupil contours extracted from eye images. Existing methods, however, either ignore or require complex hardware setups to deal with refraction effects occurring at the corneal interfaces. In this work, we provide a detailed analysis of the effects of refraction in glint-free gaze estimation using a single near-eye camera, based on the method presented by [Świrski and Dodgson 2013]. We demonstrate systematic deviations in inferred eyeball positions and gaze directions with respect to synthetic ground-truth data and show that ignoring corneal refraction can result in angular errors of several degrees. Furthermore, we quantify gaze-direction-dependent errors in pupil radius estimates. We propose a novel approach to account for corneal refraction in 3D eye model fitting and, by analyzing synthetic and real images, show that our new method successfully captures refraction effects and helps to overcome the shortcomings of the state-of-the-art approach.
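The physical step at the heart of such a model is bending each back-projected camera ray at the corneal surface before reasoning about the pupil behind it. Below is a minimal sketch of the vector form of Snell's law that any refraction-aware fitting needs; the refractive indices are common textbook values (air ≈ 1.0, cornea/aqueous ≈ 1.3375) used here as assumptions, not values taken from the paper:

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.3375):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming medium), using the vector form of Snell's law.
    n1/n2 are the refractive indices (air -> cornea by default).
    Returns the refracted unit direction, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -np.dot(n, d)                      # cosine of the incidence angle
    sin_t2 = r * r * (1.0 - cos_i * cos_i)     # squared sine of the refraction angle
    if sin_t2 > 1.0:
        return None                            # total internal reflection
    t = r * d + (r * cos_i - np.sqrt(1.0 - sin_t2)) * n
    return t / np.linalg.norm(t)
```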
Citations: 38
Towards gaze-based quantification of the security of graphical authentication schemes
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204589
C. Katsini, G. Raptis, C. Fidas, N. Avouris
In this paper, we introduce a two-step method for estimating the strength of user-created graphical passwords based on the eye-gaze behaviour during password composition. First, the individuals' gaze patterns, represented by the unique fixations on each area of interest (AOI) and the total fixation duration per AOI, are calculated. Second, the gaze-based entropy of the individual is calculated. To investigate whether the proposed metric is a credible predictor of the password strength, we conducted two feasibility studies. Results revealed a strong positive correlation between the strength of the created passwords and the gaze-based entropy. Hence, we argue that the proposed gaze-based metric allows for unobtrusive prediction of the strength of the password a user is going to create and enables intervention to the password composition for helping users create stronger passwords.
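The second step collapses the per-AOI statistics into a single Shannon-entropy value; a broader spread of fixations over the password grid yields higher entropy. A minimal sketch of that computation, assuming fixation counts or total durations per AOI have already been extracted (the AOI names in the example are hypothetical):

```python
import math

def gaze_entropy(aoi_weights):
    """Shannon entropy (bits) of the gaze distribution over areas of interest.
    aoi_weights maps each AOI to its fixation count or total fixation duration."""
    total = sum(aoi_weights.values())
    if total == 0:
        return 0.0
    entropy = 0.0
    for w in aoi_weights.values():
        if w > 0:
            p = w / total
            entropy -= p * math.log2(p)
    return entropy

# Example: fixations spread over cells of a graphical password grid
print(gaze_entropy({"cell_0": 5, "cell_3": 2, "cell_7": 2, "cell_12": 1}))
```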
Citations: 14
Hands-free web browsing: enriching the user experience with gaze and voice modality
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208338
Korok Sengupta, Min Ke, Raphael Menges, C. Kumar, Steffen Staab
Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input sources. Current approaches to hands-free interaction are primarily categorized as either voice- or gaze-based. In this work, we investigate how these two modalities could be integrated to provide a better hands-free experience for end-users. We demonstrate a multimodal browsing approach that combines eye gaze and voice inputs for optimized interaction while satisfying user preferences for unimodal benefits. An initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to single modalities for hands-free Web browsing.
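One simple way to integrate the two modalities is to let gaze resolve the target and voice resolve the action: the spoken command is dispatched to whichever page element is currently fixated. The sketch below only illustrates that idea; the element names, bounding boxes, and command set are hypothetical and are not taken from the prototype described above:

```python
def dispatch(gaze_xy, spoken_command, elements):
    """Route a voice command to the page element under the current gaze point.
    elements: list of dicts with 'name', 'bbox' = (x0, y0, x1, y1), and 'actions'."""
    x, y = gaze_xy
    for el in elements:
        x0, y0, x1, y1 = el["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1 and spoken_command in el["actions"]:
            return f"{spoken_command} -> {el['name']}"
    return "no target under gaze for this command"

page = [
    {"name": "search_box", "bbox": (100, 40, 400, 80), "actions": {"click", "type"}},
    {"name": "news_link", "bbox": (120, 200, 300, 230), "actions": {"click", "open"}},
]
print(dispatch((150, 60), "click", page))   # -> "click -> search_box"
```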
Citations: 11
Anyorbit: orbital navigation in virtual environments with eye-tracking
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3209579
B. Outram, Yun Suen Pai, Tanner Person, K. Minamizawa, K. Kunze
Gaze-based interactions promise to be fast, intuitive and effective in controlling virtual and augmented environments. Yet, there is still a lack of usable 3D navigation and observation techniques. In this work: 1) We introduce a highly advantageous orbital navigation technique, AnyOrbit, providing an intuitive and hands-free method of observation in virtual environments that uses eye-tracking to control the orbital center of movement; 2) The versatility of the technique is demonstrated with several control schemes and use-cases in virtual/augmented reality head-mounted-display and desktop setups, including observation of 3D astronomical data and spectator sports.
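Orbital navigation keeps the camera on a sphere around a centre point; in AnyOrbit that centre is selected by gaze. A minimal sketch of placing and orienting a camera on such an orbit from spherical angles, with the gaze-selected centre passed in as an input (the coordinate convention and parameter names are assumptions for illustration):

```python
import numpy as np

def orbit_camera(center, radius, azimuth, elevation):
    """Return camera position and forward direction for an orbit around `center`.
    azimuth/elevation are in radians; the camera always looks at the orbital centre."""
    center = np.asarray(center, dtype=float)
    offset = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    position = center + offset
    forward = center - position
    forward /= np.linalg.norm(forward)
    return position, forward

# Orbit around a gaze-selected point 2 m ahead, at a 3 m radius
pos, fwd = orbit_camera(center=[0.0, 2.0, 0.0], radius=3.0,
                        azimuth=np.pi / 4, elevation=np.pi / 8)
```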
Citations: 3
iTrace
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208343
Drew T. Guarnera, Corey A. Bryant, Ashwin Mishra, Jonathan I. Maletic, Bonita Sharif
The paper presents iTrace, an eye tracking infrastructure that enables eye tracking in development environments such as Visual Studio and Eclipse. Software developers work with software that comprises numerous source code files. This requires frequent switching between project artifacts during program understanding or debugging activities. Additionally, the amount of content contained within each artifact can be quite large and requires scrolling or navigation of the content. Current approaches to eye tracking are meant for fixed stimuli and struggle to capture context during these activities. iTrace overcomes these limitations, allowing developers to work in realistic settings during an eye tracking study. The iTrace architecture is presented along with several use cases showing where researchers can apply it. A short video demonstration is available at https://youtu.be/AmrLWgw4OEs
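A core problem such an infrastructure has to solve is that the stimulus scrolls: the same screen coordinate maps to different source lines as the developer navigates. A minimal sketch of that mapping (not iTrace's actual code), assuming the IDE plugin reports the editor viewport origin, the current scroll offset in lines, and the line height in pixels:

```python
def gaze_to_source_line(gaze_y_px, editor_top_px, scroll_offset_lines, line_height_px):
    """Map a vertical gaze coordinate on screen to a 1-based source line,
    compensating for the editor's current scroll position."""
    if gaze_y_px < editor_top_px:
        return None  # gaze is above the editor viewport
    visible_line = (gaze_y_px - editor_top_px) // line_height_px
    return int(scroll_offset_lines + visible_line + 1)

# Gaze 300 px below the editor top, 17 px per line, scrolled down 120 lines
print(gaze_to_source_line(420, 120, 120, 17))   # -> line 138
```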
{"title":"iTrace","authors":"Drew T. Guarnera, Corey A. Bryant, Ashwin Mishra, Jonathan I. Maletic, Bonita Sharif","doi":"10.1145/3204493.3208343","DOIUrl":"https://doi.org/10.1145/3204493.3208343","url":null,"abstract":"The paper presents iTrace, an eye tracking infrastructure, that enables eye tracking in development environments such as Visual Studio and Eclipse. Software developers work with software that is comprised of numerous source code files. This requires frequent switching between project artifacts during program understanding or debugging activities. Additionally, the amount of content contained within each artifact can be quite large and require scrolling or navigation of the content. Current approaches to eye tracking are meant for fixed stimuli and struggle to capture context during these activities. iTrace overcomes these limitations allowing developers to work in realistic settings during an eye tracking study. The iTrace architecture is presented along with several use cases of where it can be used by researchers. A short video demonstration is available at https://youtu.be/AmrLWgw4OEs","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123514068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
SLAM-based localization of 3D gaze using a mobile eye tracker
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204584
Haofei Wang, Jimin Pi, Tong Qin, S. Shen, Bertram E. Shi
Past work in eye tracking has focused on estimating gaze targets in two dimensions (2D), e.g. on a computer screen or scene camera image. Three-dimensional (3D) gaze estimates would be extremely useful when humans are mobile and interacting with the real 3D environment. We describe a system for estimating the 3D locations of gaze using a mobile eye tracker. The system integrates estimates of the user's gaze vector from a mobile eye tracker, estimates of the eye tracker pose from a visual-inertial simultaneous localization and mapping (SLAM) algorithm, and a 3D point cloud map of the environment from an RGB-D sensor. Experimental results indicate that our system produces accurate estimates of 3D gaze over a much larger range than remote eye trackers. Our system will enable applications such as the analysis of 3D human attention and more anticipative human-robot interfaces.
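Conceptually, the fusion amounts to expressing the gaze ray in the world frame via the SLAM pose and then locating where it meets the point cloud. A minimal sketch of those two steps, assuming the pose is available as a rotation matrix plus translation vector and the cloud as an N x 3 array; the parameter names and the lateral-distance threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gaze_ray_to_world(gaze_dir_local, origin_local, R_world_tracker, t_world_tracker):
    """Transform a gaze ray from the eye-tracker frame into the world frame
    using the SLAM pose (rotation R, translation t)."""
    origin_w = R_world_tracker @ np.asarray(origin_local, float) + np.asarray(t_world_tracker, float)
    dir_w = R_world_tracker @ np.asarray(gaze_dir_local, float)
    return origin_w, dir_w / np.linalg.norm(dir_w)

def gaze_point_in_cloud(origin_w, dir_w, cloud, max_lateral=0.05):
    """Pick the 3D gaze target as the cloud point closest to the gaze ray
    (within max_lateral metres), preferring the nearest such point along the ray."""
    cloud = np.asarray(cloud, float)
    rel = cloud - origin_w                                  # vectors from ray origin to points
    along = rel @ dir_w                                     # distance along the ray
    lateral = np.linalg.norm(rel - np.outer(along, dir_w), axis=1)
    mask = (along > 0) & (lateral < max_lateral)
    if not mask.any():
        return None
    return cloud[mask][np.argmin(along[mask])]
```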
Citations: 38
A gaze-contingent intention decoding engine for human augmentation
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208350
Pavel Orlov, A. Shafti, C. Auepanwiriyakul, Noyan Songur, A. Faisal
Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest in order to reach for it. With a grasp intention in mind, human eye movements produce specific, relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and gaze-point position to indicate the hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in physical space using 3D gaze vectors. We then trigger possible actions from an action grammar database to perform an assistive movement of the robotic arm, improving action performance in physically disabled people. This document is a short report to accompany the Gaze-Contingent Intention Decoding Engine demonstrator, providing details of the setup used and the results obtained.
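The decoding step combines two streams: object detections from the network and the 3D gaze estimate. A minimal sketch of resolving the intended object as the detection whose position lies closest to the gaze point; the object labels, coordinates, and distance threshold are hypothetical and only illustrate the matching idea:

```python
import numpy as np

def intended_object(gaze_point_3d, detections, max_dist=0.15):
    """Return the detected object whose estimated 3D position is closest to the
    gaze point, if it lies within max_dist metres; otherwise None.
    detections: list of (label, xyz) pairs from the object detector."""
    gaze = np.asarray(gaze_point_3d, dtype=float)
    best, best_dist = None, max_dist
    for label, xyz in detections:
        dist = np.linalg.norm(np.asarray(xyz, dtype=float) - gaze)
        if dist < best_dist:
            best, best_dist = label, dist
    return best

print(intended_object([0.30, 0.10, 0.45],
                      [("cup", [0.32, 0.08, 0.44]), ("phone", [0.60, 0.00, 0.40])]))  # -> "cup"
```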
Citations: 8
Seeing into the music score: eye-tracking and sight-reading in a choral context
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207415
M. Timoshenko
Musical sight-reading is a complex task which requires fluent use of multiple types of skills and knowledge. The ability to sight-read a score is typically described as one of the most challenging aims for beginners and finding ways of scaffolding their learning is, therefore, an important task for researchers in music education. The purpose of this study is to provide a deeper understanding of how an application of eye tracking technology can be utilized to improve choir singers' sight-reading ability. Collected data of novices' sight-reading patterns during choral rehearsal have helped identify problems that singers are facing. Analyzing corresponding patterns in sight-reading performed by expert singers may provide valuable information about helpful strategies developed with increasing experience. This project is expected to generate an approximate model, similar to the experts' eye movement path. The model will then be implemented in a training method for unskilled choral singers. Finally, as a summative result, we plan to evaluate how the training affects novices' competency in sight-reading and comprehension of the score.
Citations: 2
Gaze patterns during remote presentations while listening and speaking
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204540
Pernilla Qvarfordt, Matthew L. Lee
Managing an audience's visual attention to presentation content is critical for effective communication in tele-conferences. This paper explores how audience and presenter coordinate visual and verbal information, and how consistent their gaze behavior is, to understand whether their gaze behavior can be used for inferring and communicating attention in remote presentations. In a lab study, participants were asked first to view a short video presentation and then to rehearse and present to a remote viewer using the slides from the video presentation. We found that presenters coordinate their speech and gaze at visual regions of the slides in a timely manner (in 72% of all events analyzed), whereas the audience looked at what the presenter talked about in only 53% of all events. Rehearsing aloud and presenting resulted in similar scanpaths. To further explore whether it is possible to infer if what a presenter is looking at is also being talked about, we successfully trained models to detect an attention match between gaze and speech. These findings suggest that using the presenter's gaze has the potential to reliably communicate the presenter's focus on essential parts of the visual presentation material and to help the audience better follow the presenter.
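A simple baseline for the attention-match question is to check, for each narrated slide region, whether a fixation on that region occurs within a tolerance window around the utterance. A minimal sketch of that timing check, assuming fixations and speech segments have already been labelled with AOI identifiers (the window size is an arbitrary assumption, and this is not the trained model referred to above):

```python
def attention_match_rate(fixations, speech_segments, window_s=2.0):
    """Fraction of speech segments for which gaze landed on the mentioned AOI
    within +/- window_s seconds of the utterance.
    fixations: list of (start_s, end_s, aoi); speech_segments: list of (start_s, end_s, aoi)."""
    if not speech_segments:
        return 0.0
    matched = 0
    for s_start, s_end, s_aoi in speech_segments:
        for f_start, f_end, f_aoi in fixations:
            overlaps = f_start <= s_end + window_s and f_end >= s_start - window_s
            if overlaps and f_aoi == s_aoi:
                matched += 1
                break
    return matched / len(speech_segments)
```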
Citations: 3