
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications — Latest Publications

Seeing in time: an investigation of entrainment and visual processing in toddlers
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207418
Hsing-fen Tu
Recent neurophysiological and behavioral studies have provided strong evidence of rhythmic entrainment at the perceptual level in adults. The present study examines whether rhythmic auditory stimulation synchronized with visual stimuli and fast tempi can enhance visual processing in toddlers. Two groups of participants with different musical experience are recruited. A head-mounted camera will be used to investigate perceptual entrainment while participants perform visual search tasks.
Citations: 0
Fixation-indices based correlation between text and image visual features of webpages
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204566
Sandeep Vidyapu, V. Saradhi, S. Bhattacharya
Web elements are associated with sets of visual features determined by their data modality: text, for example, is associated with font-size and font-family, whereas images are associated with intensity and color. The lack of methods to relate these heterogeneous visual features limits attention-based analyses of webpages. In this paper, we propose a novel approach to establish the correlation between the text and image visual features that influence users' attention. We pair the visual features of text and images based on their associated fixation indices obtained from eye tracking. From the paired data, a common subspace is learned using Canonical Correlation Analysis (CCA) to maximize the correlation between them. The performance of the proposed approach is analyzed through a controlled eye-tracking experiment conducted on 51 real-world webpages. A very high correlation of 99.48% is achieved between text and images, with text-related font families and image-related color features driving the correlation.
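The CCA step described above can be sketched in a few lines. This is a minimal, numpy-only illustration, not the paper's implementation: the feature values are synthetic stand-ins for the fixation-index-paired text features (font-size, font-family) and image features (intensity, color).

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """Top canonical correlation between two feature sets (numpy-only CCA)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(C):
        # C is a symmetric positive-definite covariance matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Cxx, Cyy = Xc.T @ Xc / len(X), Yc.T @ Yc / len(Y)
    Cxy = Xc.T @ Yc / len(X)
    # singular values of the whitened cross-covariance are the canonical correlations
    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return float(np.linalg.svd(M, compute_uv=False)[0])

rng = np.random.default_rng(0)
n_pairs = 200  # fixation-index-matched (text, image) feature pairs
text_feats = rng.normal(size=(n_pairs, 2))           # stand-in for font-size, font-family
shared = text_feats @ rng.normal(size=(2, 2))        # shared structure for the demo
image_feats = shared + 0.1 * rng.normal(size=(n_pairs, 2))  # stand-in for intensity, color

r = first_canonical_correlation(text_feats, image_feats)
print(f"top canonical correlation: {r:.4f}")
```

With strongly shared structure in the synthetic data, the top canonical correlation comes out close to 1, mirroring the high correlation the paper reports on real webpage data.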
Citations: 4
Scanpath comparison in medical image reading skills of dental students: distinguishing stages of expertise development
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204550
Nora Castner, Enkelejda Kasneci, Thomas C. Kübler, K. Scheiter, Juliane Richter, Thérése F. Eder, F. Hüttig, C. Keutel
A popular topic in eye tracking is the difference between novices and experts and their domain-specific eye movement behaviors. However, little research addresses how expertise develops and, more specifically, the developmental stages of eye movement behavior. Our work compares the scanpaths of dental students across five semesters viewing orthopantomograms (OPTs), using classifiers to distinguish sixth-semester through tenth-semester students. We used the SubsMatch 2.0 analysis algorithm and the Needleman-Wunsch algorithm. Overall, both classifiers were able to distinguish the stages of expertise in medical image reading above chance level. Specifically, they accurately identified sixth-semester students with no prior training as well as sixth-semester students after training. Ultimately, by using scanpath models to recognize gaze patterns characteristic of learning stages, we can provide more adaptive, gaze-based training for students.
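Needleman-Wunsch, named above, is a global sequence alignment algorithm; applied to scanpaths, the sequences are strings of area-of-interest (AOI) labels. A minimal sketch of the scoring recurrence follows; the match/mismatch/gap values are illustrative, not the paper's parameters.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two AOI-label sequences."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align a[:i] against nothing
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align b[:j] against nothing
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]

# Two scanpaths over hypothetical AOIs A-D: one fixation (C) is skipped.
print(needleman_wunsch("ABCD", "ABD"))  # → 2 (three matches, one gap)
```

Alignment scores like this can then feed a classifier that separates expertise stages by how similar a student's scanpath is to reference scanpaths.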
Citations: 40
Using eye tracking to simplify screening for visual field defects and improve vision rehabilitation: extended abstract
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207414
Birte Gestefeld, A. Grillini, J. Marsman, F. Cornelissen
My thesis will encompass two main research objectives: (1) evaluating eye tracking as a method to screen for visual field defects, and (2) investigating how vision rehabilitation therapy can be improved by employing eye tracking.
Citations: 0
Predicting the gaze depth in head-mounted displays using multiple feature regression
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204547
Martin Weier, T. Roth, André Hinkenjann, P. Slusallek
Head-mounted displays (HMDs) with integrated eye trackers have opened up a new realm for gaze-contingent rendering. Accurate estimation of gaze depth is essential when modeling the optical capabilities of the eye. Recently, multifocal displays have been gaining importance, requiring focus estimates to control displays or lenses. Deriving gaze depth solely by sampling the scene's depth at the point of regard fails for complex or thin objects, as eye tracking suffers from inaccuracies. Gaze depth measures using the eye's vergence only provide an accurate depth estimate for the first meter. In this work, we combine vergence measures and multiple depth measures into feature sets. These data are used to train a regression model that delivers improved estimates. We present a study showing that using multiple features allows for an accurate estimation of the focused depth (MSE < 0.1 m) over a wide range (first 6 m).
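The idea of combining a vergence estimate with several scene-depth samples into one feature vector can be sketched with ordinary least squares. All data below are synthetic assumptions (noise levels, number of depth samples, and the linear model are illustrative; the paper's regression model and features may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600
true_depth = rng.uniform(0.3, 6.0, n)  # focused depth in metres

# Vergence-based estimate: assumed accurate near, increasingly noisy with distance.
vergence = true_depth + rng.normal(0, 0.1, n) * true_depth

# Several scene-depth samples around the point-of-regard (assumed noise 0.2 m).
depth_samples = true_depth[:, None] + rng.normal(0, 0.2, (n, 5))

# Combined feature set: vergence + depth samples + bias term.
X = np.column_stack([vergence, depth_samples, np.ones(n)])

# Least-squares regression on a training split, MSE on a held-out split.
w, *_ = np.linalg.lstsq(X[:500], true_depth[:500], rcond=None)
mse = np.mean((X[500:] @ w - true_depth[500:]) ** 2)
print(f"held-out MSE: {mse:.4f} m^2")
```

Because the regression can weight the reliable-near vergence signal against the averaged depth samples, the combined estimate beats either feature family alone, which is the intuition behind the paper's MSE < 0.1 m result.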
Citations: 27
Microsaccade detection using pupil and corneal reflection signals
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204573
D. Niehorster, M. Nyström
In contemporary research, microsaccade detection is typically performed using the calibrated gaze-velocity signal acquired from a video-based eye tracker. To generate this signal, the pupil and corneal reflection (CR) signals are subtracted from each other and a differentiation filter is applied, both of which may prevent small microsaccades from being detected due to signal distortion and noise amplification. We propose a new algorithm in which microsaccades are detected directly from the uncalibrated pupil and CR signals. It is based on detrending followed by windowed correlation between the pupil and CR signals. The proposed algorithm outperforms the most commonly used algorithm in the field (Engbert & Kliegl, 2003), in particular for small-amplitude microsaccades that are difficult to see in the velocity signal even with the naked eye. We argue that it is advantageous to consider the most basic outputs of the eye tracker, i.e. the pupil and CR signals, when detecting small microsaccades.
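The detrend-then-windowed-correlation idea rests on a simple observation: during a microsaccade the pupil and CR signals move together, so their short-window correlation rises. A minimal sketch, with synthetic signals and a moving-average detrend standing in for the paper's actual preprocessing:

```python
import numpy as np

def windowed_correlation(pupil, cr, win=11):
    """Sliding-window Pearson correlation of detrended pupil and CR signals."""
    half = win // 2
    # crude detrend: subtract a moving average to remove slow drift
    kernel = np.ones(win) / win
    p = pupil - np.convolve(pupil, kernel, mode="same")
    c = cr - np.convolve(cr, kernel, mode="same")
    corr = np.zeros(len(pupil))
    for i in range(half, len(pupil) - half):
        pw, cw = p[i - half:i + half + 1], c[i - half:i + half + 1]
        if pw.std() > 0 and cw.std() > 0:
            corr[i] = np.corrcoef(pw, cw)[0, 1]
    return corr

rng = np.random.default_rng(2)
t = np.arange(1000)
event = np.where((t > 500) & (t < 520), 1.0, 0.0)   # common small displacement
pupil = event + rng.normal(0, 0.05, t.size)          # pupil signal + noise
cr = event + rng.normal(0, 0.05, t.size)             # CR signal + independent noise
corr = windowed_correlation(pupil, cr)
print("peak correlation near the event:", corr[495:525].max())
```

Away from the event the two noise streams are independent and the correlation hovers near zero; at the event both signals share the displacement and the correlation peaks, which is what a threshold on this measure would detect.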
Citations: 1
Self-made mobile gaze tracking for group studies
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3208347
M. Toivanen, V. Salonen, M. Hannula
Mobile gaze tracking does not need to be expensive. We have built a mobile gaze tracking system consisting of a glasses-like frame and software that computes the gaze point. As the total cost of the equipment is less than a thousand euros, we have prepared five devices that we use in group studies to simultaneously estimate multiple students' gaze in the classroom. This inexpensive mobile gaze tracking technology opens new possibilities for studying attentional processes at the group level.
Citations: 1
Audio-visual interaction in emotion perception for communication: doctoral symposium, extended abstract
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3207422
M. D. Boer, D. Başkent, F. Cornelissen
Information from multiple modalities contributes to recognizing emotions. While interactions between modalities are known to occur, it is unclear what characterizes them. These interactions, and how they change with sensory impairments, are the main subject of this PhD project. This extended abstract for the Doctoral Symposium of ETRA 2018 describes the project: its background, what I hope to achieve, and some preliminary results.
Citations: 1
Smooth-i: smart re-calibration using smooth pursuit eye movements
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204585
Argenis Ramirez Gomez, Hans-Werner Gellersen
Gaze interaction depends on calibration. However, gaze calibration can deteriorate over time, affecting the usability of the system. We propose to use motion matching between smooth pursuit eye movements and known motion on the display to determine when accuracy has drifted, and to use this as input for re-calibration. To explore this idea we developed Smooth-i, an algorithm that stores calibration points and updates them incrementally when inaccuracies are identified. To validate the accuracy of Smooth-i, we conducted a study with five participants and a remote eye tracker. A baseline calibration profile was used by all participants to test the accuracy of Smooth-i re-calibration following interaction with moving targets. Results show that Smooth-i manages re-calibration efficiently, updating the calibration profile only when inaccurate data samples are detected.
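The motion-matching test at the heart of this approach can be sketched as a correlation check: if the gaze trajectory correlates strongly with the known target motion, the user was following the target, and any constant gaze-target offset can be read as calibration drift. The trajectories, the 0.8 threshold, and the one-axis simplification below are all illustrative assumptions, not Smooth-i's actual parameters:

```python
import numpy as np

def pursuit_match(gaze_x, target_x, threshold=0.8):
    """Did gaze follow the target motion, and by what constant offset?"""
    r = np.corrcoef(gaze_x, target_x)[0, 1]   # motion-matching score
    offset = float(np.mean(gaze_x - target_x))  # candidate drift correction
    return r > threshold, offset

t = np.linspace(0, 2 * np.pi, 200)
target_x = np.sin(t)                 # known target motion on the display
# Simulated gaze: follows the target with a 0.15 calibration drift plus noise.
gaze_x = np.sin(t) + 0.15 + np.random.default_rng(3).normal(0, 0.02, t.size)

followed, offset = pursuit_match(gaze_x, target_x)
print(followed, round(offset, 2))
```

When `followed` is true, the recovered offset could be fed back to update the stored calibration points, which is the incremental re-calibration idea the abstract describes.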
Citations: 11
A system to determine if learners know the divisibility rules and apply them correctly
Pub Date : 2018-06-14 DOI: 10.1145/3204493.3204526
P. Potgieter, P. Blignaut
Mathematics teachers may find it challenging to assess the learning that takes place in learners' minds. Typical true/false or multiple-choice assessments, whether in oral, written or electronic format, do not provide evidence that learners applied the correct principles. A system was developed to analyse learners' gaze behaviour while they determined whether a multi-digit dividend is divisible by a divisor. The system provides facilities for a teacher to set up tests and generate various types of quantitative and qualitative reports. The system was tested with a group of 16 learners from Grade 7 to Grade 10 in a pre-post experiment investigating the effect of revision on their performance. It was shown that, with tests carefully compiled according to a set of heuristics, eye tracking can be used to determine whether learners use the correct strategy when applying divisibility rules.
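For readers unfamiliar with the domain, two of the divisibility rules the learners apply look like this in code. This is purely illustrative background on the rules themselves; the paper's system analyses gaze behaviour, not program output:

```python
def divisible_by_3(n: int) -> bool:
    # rule: a number is divisible by 3 iff its digit sum is divisible by 3
    return sum(int(d) for d in str(abs(n))) % 3 == 0

def divisible_by_4(n: int) -> bool:
    # rule: a number is divisible by 4 iff its last two digits are
    return int(str(abs(n))[-2:]) % 4 == 0

print(divisible_by_3(123456), divisible_by_4(123456))  # → True True
```

The gaze-based interest of such rules is that each one prescribes which digits a learner should look at (all digits for the rule of 3, only the last two for the rule of 4), so fixation locations reveal whether the correct strategy was used.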
Citations: 1