
2022 Symposium on Eye Tracking Research and Applications: Latest Publications

SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3532522
Samantha Aziz, D. Lohr, Oleg V. Komogortsev
Comparing the performance of new eye-tracking devices against an established benchmark is vital for identifying differences in the way eye movements are reported by each device. This paper introduces a new paired data set comprising eye movement recordings captured simultaneously with both the EyeLink 1000—considered the “gold standard” in eye-tracking research—and the recently released AdHawk MindLink eye tracker. Our work presents a methodology for simultaneous data collection and a comparison of the resulting eye-tracking signal quality achieved by each device.
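The listing does not include the authors' analysis code. As a rough sketch of what a signal-quality comparison involves, the Python example below computes two standard metrics, spatial accuracy (mean angular offset from a known target) and spatial precision (RMS of sample-to-sample angular distances), over stand-in paired recordings; the array names and synthetic data are assumptions, not the paper's data.

```python
import numpy as np

def accuracy_deg(gaze_deg, target_deg):
    """Spatial accuracy: mean angular offset (degrees) from the known target."""
    return float(np.mean(np.linalg.norm(gaze_deg - target_deg, axis=1)))

def precision_rms_deg(gaze_deg):
    """Spatial precision: RMS of successive sample-to-sample angular distances."""
    diffs = np.diff(gaze_deg, axis=0)
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))

# Stand-in paired recordings of one fixation interval, (N, 2) arrays in
# degrees of visual angle; real data would come from the two devices.
rng = np.random.default_rng(0)
target = np.zeros((500, 2))
recordings = {
    "EyeLink 1000": rng.normal(0.0, 0.05, size=(500, 2)),
    "AdHawk MindLink": rng.normal(0.1, 0.15, size=(500, 2)),
}
for name, rec in recordings.items():
    print(name, accuracy_deg(rec, target), precision_rms_deg(rec))
```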
Citations: 5
Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529632
M. S. Arefin, J. Swan, R. C. Hoffing, Steven M. Thurman
Virtual Reality (VR) technology has advanced to include eye tracking, enabling novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near to far depth significantly decreased EVA and increased IPD, whereas the opposite pattern was observed when shifting from far to near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, supporting the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could serve as a new tool for adaptively rendering information based on depth and improving the VR user experience.
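The two derived variables have a straightforward geometric reading: EVA is the angle between the left- and right-eye gaze rays toward the fixated point, and IPD is the distance between the pupil centers. The sketch below illustrates these definitions with hypothetical headset-frame coordinates; it is an illustration of the geometry, not the authors' pipeline.

```python
import numpy as np

def vergence_angle_deg(left_eye, right_eye, gaze_point):
    """EVA: angle (degrees) between the two gaze rays, eye center -> 3D gaze point."""
    v_l, v_r = gaze_point - left_eye, gaze_point - right_eye
    cos_a = np.dot(v_l, v_r) / (np.linalg.norm(v_l) * np.linalg.norm(v_r))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def interpupillary_distance(left_pupil, right_pupil):
    """IPD: Euclidean distance between the two pupil centers."""
    return float(np.linalg.norm(left_pupil - right_pupil))

# Hypothetical headset-frame coordinates in meters.
left_eye = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([0.032, 0.0, 0.0])
near = np.array([0.0, 0.0, 0.5])  # fixation 0.5 m away
far = np.array([0.0, 0.0, 5.0])   # fixation 5 m away
print(vergence_angle_deg(left_eye, right_eye, near))  # ~7.3 deg: larger EVA at near depth
print(vergence_angle_deg(left_eye, right_eye, far))   # ~0.7 deg: smaller EVA at far depth
print(interpupillary_distance(left_eye, right_eye))   # 0.064 m between eye centers
```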
Citations: 4
Measuring Cognitive Effort with Pupillary Activity and Fixational Eye Movements When Reading: Longitudinal Comparison of Children With and Without Primary Music Education
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529636
Agata Rodziewicz-Cybulska, Krzysztof Krejtz, A. Duchowski, I. Krejtz
This article evaluates the Low/High Index of Pupillary Activity (LHIPA), a measure of cognitive effort based on pupil response, in the context of reading. At the beginning of 2nd and 3rd grade, 107 children (8-9 y.o.) from music and general primary schools were asked to read 40 sentences with keywords differing in length and frequency while their eye movements were recorded. Sentences with low-frequency or long keywords received more attention than sentences with high-frequency or short keywords. The word frequency and length effects were more pronounced in younger children. In 2nd grade, children with music education dwelt less on sentences with short frequent keywords than on sentences with long frequent keywords. As expected, LHIPA decreased over sentences with low-frequency short keywords, suggesting more cognitive effort at earlier stages of reading ability. This finding shows the utility of LHIPA as a measure of cognitive effort in education.
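LHIPA itself is a wavelet-based measure. As a deliberately simplified stand-in that only conveys the low/high-frequency ratio idea behind it, the sketch below compares FFT band powers of a synthetic pupil-diameter trace; the 0.5 Hz split and 60 Hz sampling rate are illustrative assumptions, and this is not the published LHIPA algorithm.

```python
import numpy as np

def low_high_ratio(pupil_mm, fs, split_hz=0.5):
    """Crude low/high spectral-power ratio of a pupil-diameter trace.
    Only a stand-in for the wavelet-based LHIPA, not the published algorithm."""
    x = pupil_mm - np.mean(pupil_mm)                     # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    lo = power[(freqs > 0) & (freqs <= split_hz)].sum()  # slow dilations
    hi = power[freqs > split_hz].sum()                   # fast fluctuations
    return lo / hi

# Synthetic 60 Hz pupil trace: slow 0.1 Hz drift plus fast noise.
fs = 60.0
t = np.arange(0, 30, 1.0 / fs)
rng = np.random.default_rng(1)
pupil = 4.0 + 0.3 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.normal(size=t.size)
print(low_high_ratio(pupil, fs))
```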
Citations: 1
EyeLikert: Eye-based Interactions for Answering Surveys
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529776
Moritz Langner, N. Aßfalg, Peyman Toreini, A. Maedche
Surveys are a widely used method for collecting data from participants. However, responding to surveys is a time-consuming task that requires cognitive and physical effort. Eye-based interactions offer the advantages of high-speed pointing, low physical effort, and implicitness. These advantages have already been leveraged successfully in other domains, but have so far not been investigated for supporting participants in responding to surveys. In this paper, we present EyeLikert, a tool that enables users to answer Likert-scale questions in surveys with their eyes. EyeLikert integrates three different eye-based interactions that take the Midas Touch problem into account. We hypothesize that enabling eye-based interactions to fill out surveys offers the potential to reduce physical effort, increase the speed of answering questions, and thereby reduce drop-out rates.
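One common way to address the Midas Touch problem, dwell-time selection, can be sketched as a small state machine: gaze must rest on an option for a fixed duration before it counts as an answer. The code below is a generic illustration of that idea, not one of the three interactions implemented in EyeLikert; the 0.8 s threshold is an assumption.

```python
import time

class DwellSelector:
    """Fire a selection only after gaze rests on one option long enough,
    so merely looking at an answer does not immediately choose it."""
    def __init__(self, dwell_s=0.8):  # 0.8 s threshold is an assumption
        self.dwell_s = dwell_s
        self.current = None
        self.since = None

    def update(self, hovered_option, now=None):
        now = time.monotonic() if now is None else now
        if hovered_option != self.current:    # gaze moved to a different option
            self.current, self.since = hovered_option, now
            return None
        if self.current is not None and now - self.since >= self.dwell_s:
            selected = self.current
            self.current = self.since = None  # fire once, then reset
            return selected
        return None

sel = DwellSelector()
sel.update("agree", now=0.0)         # gaze lands on the "agree" option
print(sel.update("agree", now=0.9))  # -> "agree" after 0.9 s of sustained dwell
```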
Citations: 0
Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529624
Negar Alinaghi, Ioannis Giannopoulos
Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs in order to simulate the gaze signal at head level. Using image-processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and the respective frames are then merged. Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, with approximately 37% error introduced when saccades are computed without considering head movements.
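The projection step can be illustrated with OpenCV: given the 3x3 homography a stitching algorithm reports for each frame, fixation centroids are mapped into the panorama's coordinate system and the saccade amplitude is measured there. The homography values below are made up for illustration.

```python
import numpy as np
import cv2

def project_to_panorama(points_xy, H):
    """Map fixation coordinates from one scene-camera frame into the
    stitched panorama using that frame's 3x3 homography H."""
    pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Made-up homographies as a stitcher might report for frames i and j;
# frame j is offset because the head moved between the two fixations.
H_i = np.eye(3, dtype=np.float64)
H_j = np.array([[1.0, 0.0, 120.0],
                [0.0, 1.0, -15.0],
                [0.0, 0.0, 1.0]])

fix_i = project_to_panorama([[320, 240]], H_i)[0]  # fixation centroid, frame i
fix_j = project_to_panorama([[300, 250]], H_j)[0]  # next fixation, frame j
print(np.linalg.norm(fix_j - fix_i))  # head-motion-corrected saccade length (px)
```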
Citations: 1
Multi-User Eye-Tracking
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3532197
Bhanuka Mahanama
Human gaze characteristics provide informative cues about behavior during various activities. With traditional eye trackers, assessing gaze characteristics in the wild requires a dedicated device per participant and is therefore not feasible for large-scale experiments. In this study, we propose a multi-user eye-tracking system based on commodity hardware. We leverage recent advancements in deep neural networks and large-scale datasets to implement our system. Our preliminary studies provide promising results for multi-user eye-tracking on commodity hardware, offering a cost-effective solution for large-scale studies.
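A minimal sketch of such a pipeline, assuming a single webcam frame, an off-the-shelf face detector, and a placeholder gaze model: each detected face is cropped and passed to a gaze-estimation network, one result per user. The `estimate_gaze` stub stands in for whatever DNN the system would load; it is not an API from the paper.

```python
import cv2

# Off-the-shelf Haar cascade face detector; assumes the opencv-python
# distribution, which ships the cascade files under cv2.data.haarcascades.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_gaze(face_crop):
    # Stand-in for an appearance-based gaze CNN trained on a large-scale
    # dataset; a real system would return a per-user gaze direction here.
    return (0.0, 0.0)

def gaze_for_all_users(frame_bgr):
    """One detection pass over a single webcam frame; each face crop is
    handed to the gaze model, yielding one (bounding box, gaze) per user."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [((x, y, w, h), estimate_gaze(frame_bgr[y:y + h, x:x + w]))
            for (x, y, w, h) in faces]
```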
Citations: 2
Usability of the super-vowel for gaze-based text entry
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529231
J. Matulewski, M. Patera
We experimentally tested the idea of reducing the number of buttons in a gaze-based text entry system by replacing all vowels with a single diamond character, which we call the super-vowel. It is inspired by historical optimizations of written language, such as abjads. This way, the number of items on the screen is reduced, simplifying text input and allowing the buttons to be made larger. However, the modification can also be a distractor that increases the number of errors. An experiment with 29 people showed that, for non-standard methods of entering text, the modification slightly increases text entry speed and reduces the number of errors. However, this does not apply to the standard keyboard, a direct transformation of physical computer keyboards with a QWERTY button layout.
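The core encoding is easy to state: map every vowel to one shared key, then resolve the resulting ambiguity against a lexicon. The sketch below illustrates this with a diamond character and a toy dictionary; the paper's actual disambiguation mechanism may differ.

```python
VOWELS = set("aeiou")
SUPER_VOWEL = "◆"  # one shared key standing in for every vowel

def encode(word):
    """Collapse all vowels onto the single super-vowel key."""
    return "".join(SUPER_VOWEL if c in VOWELS else c for c in word.lower())

def candidates(typed, lexicon):
    """Dictionary words whose super-vowel form matches the typed sequence;
    a real system would rank or let the user pick among these."""
    return [w for w in lexicon if encode(w) == typed]

lexicon = ["bat", "bet", "bit", "but", "boat", "best"]
print(encode("bat"))               # 'b◆t'
print(candidates("b◆t", lexicon))  # ['bat', 'bet', 'bit', 'but'] -> ambiguity cost
```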
Citations: 2
Advancing dignity for adaptive wheelchair users via a hybrid eye tracking and electromyography training game
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529612
Peter A. Smith, Matt Dombrowski, Shea McLinden, Calvin MacDonald, Devon Lynn, John Sparkman, Dominique Courbin, Albert Manero
Maintaining autonomous activities can be challenging for patients with neuromuscular disorders or quadriplegia, for whom controlling a powered wheelchair with a joystick may not be feasible. Advancements in human-machine interfaces have produced methods that capture an individual's intent through non-traditional controls and communicate the user's desires to a robotic interface. This research explores the design of a training game that teaches users to control a wheelchair through such a device using electromyography (EMG). The training game combines EMG and eye tracking to enhance the impression of dignity while building self-efficacy and supporting autonomy for users. The system implements both eye tracking and surface electromyography, via the temporalis muscles, for gamified training and simulation of a novel wheelchair interface.
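One plausible way to fuse the two channels, assumed here purely for illustration rather than taken from the paper, is to let gaze choose the direction while a temporalis EMG burst acts as a clutch that arms motion:

```python
import numpy as np

EMG_THRESHOLD = 0.15  # hypothetical normalized RMS level for a deliberate clench

def emg_rms(window):
    """Root-mean-square amplitude of a short EMG sample window."""
    return float(np.sqrt(np.mean(np.square(window))))

def wheelchair_command(gaze_x, gaze_y, emg_window):
    """Gaze picks the direction; an EMG burst arms motion, so stray
    glances alone never drive the chair."""
    if emg_rms(emg_window) < EMG_THRESHOLD:
        return "stop"
    if abs(gaze_x) > abs(gaze_y):
        return "right" if gaze_x > 0 else "left"
    return "forward" if gaze_y > 0 else "reverse"

rng = np.random.default_rng(2)
print(wheelchair_command(0.7, 0.1, rng.normal(0, 0.05, 200)))  # relaxed -> "stop"
print(wheelchair_command(0.7, 0.1, rng.normal(0, 0.30, 200)))  # clench  -> "right"
```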
Citations: 1
User Perception of Smooth Pursuit Target Speed
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529234
Heiko Drewes, Sophia Sakel, H. Hussmann
Gaze-aware interfaces should work on all display sizes. This paper investigates whether angular velocity or tangential speed should be kept constant when scaling a gaze-aware interface based on circular smooth pursuits to another display size. We also address the question of which target speed and which trajectory size feel most comfortable for users. We present the results of a user study in which participants were asked how they perceived the speed and the radius of a circularly moving smooth pursuit target. The data show that the users’ judgment of the optimal speed corresponds with an optimal detection rate. The results also enable us to give an optimal value pair for target speed and trajectory radius. Additionally, we give a functional relation for adapting the target speed when scaling the geometry so as to maintain optimal detection rate and user experience.
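The underlying scaling relation is simply v = ω·r: if the angular velocity around the circle is held constant, doubling the trajectory radius doubles the tangential speed of the target. The sketch below makes that trade-off concrete; the numeric values are illustrative, not the paper's optimal pair.

```python
import math

def tangential_speed(angular_velocity_deg_s, radius):
    """v = omega * r; v inherits the unit of r (px/s here, or deg/s of
    visual angle if the radius is given in degrees)."""
    return math.radians(angular_velocity_deg_s) * radius

omega = 120.0  # deg/s around the circle center (illustrative value)
for radius_px in (100, 200, 400):
    # Holding omega fixed, doubling the radius doubles the tangential speed.
    print(radius_px, round(tangential_speed(omega, radius_px), 1))
```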
Citations: 1
Fairness in Oculomotoric Biometric Identification
Pub Date: 2022-06-08 DOI: 10.1145/3517031.3529633
Paul Prasse, D. R. Reich, Silvia Makowski, L. Jäger, T. Scheffer
Gaze patterns are known to be highly individual, and eye movements can therefore serve as a biometric characteristic. We explore aspects of the fairness of biometric identification based on gaze patterns. We find that while oculomotoric identification does not favor any particular gender and does not significantly favor any age range, it is unfair with respect to ethnicity. Moreover, fairness with respect to ethnicity cannot be achieved by balancing the training data for the best-performing model.
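A basic fairness audit of an identification model reduces to computing the metric per demographic group and inspecting the gap between the best- and worst-served groups. The sketch below shows that pattern with invented labels and predictions; the paper's actual evaluation is more involved.

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Identification accuracy per demographic group, plus the max-min gap
    often used as a simple fairness indicator."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    acc = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
           for g in np.unique(groups)}
    return acc, max(acc.values()) - min(acc.values())

# Invented identification results with a group label per probe sample.
y_true = [3, 3, 7, 7, 9, 9, 1, 1]
y_pred = [3, 3, 7, 2, 9, 9, 1, 5]
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
acc, gap = per_group_accuracy(y_true, y_pred, groups)
print(acc, gap)
```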
Citations: 1