
Conference on Novel Gaze-Controlled Applications: Latest Publications

Gaze interaction from bed
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983313
J. P. Hansen, Javier San Agustin, H. Skovsgaard
This paper presents a low-cost gaze tracking solution for bedbound people, composed of freeware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assist the person. Accuracy and precision of the tracking system were tested in an experiment with 12 subjects. We obtained a tracking quality that is sufficiently good to control applications designed for gaze interaction. The best tracking conditions were achieved when people were sitting up rather than lying down. Also, gaze tracking in the bottom part of the image was found to be more precise than in the top part.
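Accuracy and precision, as reported above, are standard but distinct eye-tracking metrics. The following is a minimal sketch, not code from the paper, of how the two are conventionally computed from raw gaze samples: accuracy as the mean offset from the known target, precision as the RMS sample-to-sample dispersion. The function name and pixel units are illustrative; a full evaluation would convert to degrees of visual angle.

```python
import numpy as np

def accuracy_and_precision(gaze_xy, target_xy):
    """Accuracy: mean Euclidean offset of gaze samples from the true target.
    Precision: RMS of consecutive sample-to-sample displacements (jitter).
    Units follow the input (pixels here)."""
    gaze = np.asarray(gaze_xy, dtype=float)
    target = np.asarray(target_xy, dtype=float)
    accuracy = np.linalg.norm(gaze - target, axis=1).mean()
    steps = np.diff(gaze, axis=0)  # offsets between consecutive samples
    precision = np.sqrt((np.linalg.norm(steps, axis=1) ** 2).mean())
    return accuracy, precision

# Toy run: four samples recorded while fixating a point shown at (400, 300)
acc, prec = accuracy_and_precision(
    [(402, 301), (398, 299), (401, 303), (399, 298)], (400, 300))
print(f"accuracy={acc:.2f} px, precision={prec:.2f} px RMS")
```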
Citations: 15
Gaze and voice controlled drawing
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983311
J. Kamp, V. Sundstedt
Eye tracking is a process that allows an observer's gaze to be determined in real time by measuring their eye movements. Recent work has examined the possibility of using gaze control as an alternative input modality in interactive applications. Alternative means of interaction are especially important for disabled users, for whom traditional techniques such as mouse and keyboard may not be feasible. This paper proposes a novel combination of gaze and voice commands as a means of hands-free interaction in a paint-style program. A drawing application is implemented which is controllable by input from gaze and voice. Voice commands are used to activate drawing, which allows gaze to be used only for positioning the cursor. In previous work, gaze has also been used to activate drawing using dwell time. The drawing application is evaluated using subjective responses from participant user trials. The main result indicates that although gaze and voice offered less control than traditional input devices, the participants reported that the combination was more enjoyable.
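To make the interaction scheme concrete, here is a minimal, hypothetical sketch of the kind of gaze-and-voice drawing loop the abstract describes: voice toggles the pen while gaze only positions the cursor, so merely looking at the canvas never draws by itself. The class and method names are invented for illustration; the paper's implementation is not published here.

```python
from dataclasses import dataclass, field

@dataclass
class GazeVoiceCanvas:
    """Voice toggles the pen; gaze only positions the cursor, so fixating
    the canvas cannot draw by itself (no dwell-time activation needed)."""
    pen_down: bool = False
    strokes: list = field(default_factory=list)

    def on_voice_command(self, command: str) -> None:
        if command == "draw":
            self.pen_down = True
            self.strokes.append([])          # start a new stroke
        elif command == "stop":
            self.pen_down = False

    def on_gaze_sample(self, x: float, y: float) -> None:
        if self.pen_down:
            self.strokes[-1].append((x, y))  # gaze traces the stroke

canvas = GazeVoiceCanvas()
canvas.on_voice_command("draw")
canvas.on_gaze_sample(120.0, 80.0)
canvas.on_gaze_sample(140.0, 95.0)
canvas.on_voice_command("stop")
print(canvas.strokes)  # [[(120.0, 80.0), (140.0, 95.0)]]
```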
Citations: 48
Exploring interaction modes for image retrieval
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983312
C. Engelman, Rui Li, J. Pelz, P. Shi, Anne R. Haake
The number of digital images in use is growing at an increasing rate across a wide array of application domains. Accordingly, there is an ever-growing need for innovative ways to help end users gain access to these images quickly and effectively. Moreover, it is becoming increasingly difficult to manually annotate these images, for example with text labels, to generate useful metadata. One such method for helping users gain access to digital images is content-based image retrieval (CBIR). Practical use of CBIR systems has been limited by several "gaps", including the well-known semantic gap and usability gaps [1]. Innovative designs are needed to bring end users into the loop to bridge these gaps. Our human-centered approaches integrate human perception and multimodal interaction to facilitate more usable and effective image retrieval. Here we show that multi-touch interaction is more usable than gaze-based interaction for explicit image region selection.
Citations: 0
Evaluation of a remote webcam-based eye tracker
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983309
H. Skovsgaard, Javier San Agustin, Sune Alstrup Johansen, J. P. Hansen, M. Tall
In this paper we assess the performance of an open-source gaze tracker in a remote (i.e. table-mounted) setup and compare it with two commercial eye trackers. An experiment with 5 subjects showed the open-source eye tracker to have a significantly higher level of accuracy than one of the commercial systems, the Mirametrix S1, but also a higher error rate than the other commercial system, a Tobii T60. We conclude that the webcam solution may be viable for people who need a substitute for mouse input but cannot afford a commercial system.
Citations: 11
Comparison of gaze-to-objects mapping algorithms
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983308
O. Špakov
Gaze data processing is an important and necessary step in gaze-based applications. This study focuses on the comparison of several gaze-to-object mapping algorithms, using various dwell times for selection and presenting targets of several types and sizes. Seven algorithms found in the literature were compared against two newly designed algorithms. The study revealed that a fractional mapping algorithm (known from the literature) produced the highest rate of correct selections and the fastest selection times, but also the highest rate of incorrect selections. The newly designed dynamic competing algorithm showed the next best result, but also a high rate of incorrect selections. Only a small effect of target type on the calculated statistics was observed. Strictly centered gazing helped to increase the rate of correct selections for all algorithms and target types. Directions for further improvement of mapping algorithms and for future investigation are outlined.
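The paper's exact algorithm definitions are not reproduced in this listing, but one simple reading of a fractional mapping scheme, sketched below under the assumption of rectangular targets, scores each target by the fraction of dwell-window gaze samples that fall inside it and selects a target only once its score passes a threshold. The function name, target representation, and threshold value are all illustrative.

```python
def fractional_mapping(samples, targets, threshold=0.8):
    """Score each target by the fraction of dwell-window gaze samples
    landing inside its bounds; select the best-scoring target if it
    exceeds the threshold, otherwise select nothing."""
    def inside(pt, rect):                  # rect = (x, y, w, h)
        x, y, w, h = rect
        return x <= pt[0] <= x + w and y <= pt[1] <= y + h

    scores = {name: sum(inside(s, rect) for s in samples) / len(samples)
              for name, rect in targets.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

targets = {"A": (0, 0, 100, 100), "B": (120, 0, 100, 100)}
samples = [(50, 50), (55, 48), (60, 52), (130, 40), (52, 49)]
print(fractional_mapping(samples, targets, threshold=0.6))  # -> A
```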
Citations: 17
Mobile gaze-based screen interaction in 3D environments
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983304
D. Mardanbegi, D. Hansen
Head-mounted eye trackers can be used for mobile interaction as well as for gaze estimation. This paper presents a method that enables the user to interact with any planar digital display in a 3D environment using a head-mounted eye tracker. An effective method for identifying the screens in the user's field of view is also presented, which can be applied in a general scenario in which multiple users interact with multiple screens. A particular application of this technique is implemented in a home environment with two big screens and a mobile phone. In this application, a user was able to interact with these screens using a wireless head-mounted eye tracker.
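The abstract does not detail the mapping step itself, but once a planar screen's four corners have been identified in the scene-camera image, a planar homography is a common way to carry the scene-camera gaze point onto screen coordinates. The sketch below assumes OpenCV is available and that corner detection has already happened; it is an illustration of the general technique, not the paper's implementation.

```python
import numpy as np
import cv2

def gaze_to_screen(corners_in_scene, screen_wh, gaze_in_scene):
    """Map a gaze point from scene-camera coordinates onto a planar screen.
    corners_in_scene: the screen's corners as detected in the scene image,
    ordered TL, TR, BR, BL; screen_wh: screen resolution (w, h)."""
    w, h = screen_wh
    src = np.asarray(corners_in_scene, dtype=np.float32)
    dst = np.asarray([(0, 0), (w, 0), (w, h), (0, h)], dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)
    pt = np.asarray([[gaze_in_scene]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]           # (x, y) on screen

# Example: a skewed screen seen by the scene camera, gaze near its middle
corners = [(210, 120), (580, 140), (570, 420), (200, 400)]
print(gaze_to_screen(corners, (1920, 1080), (395, 270)))   # ~ screen centre
```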
Citations: 46
Towards intelligent user interfaces: anticipating actions in computer games
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983306
Hendrik Koesling, A. Kenny, A. Finke, H. Ritter, S. McLoone, Tomas E. Ward
The study demonstrates how the on-line processing of eye movements in First Person Shooter (FPS) games helps to predict player decisions regarding subsequent actions. Based on action-control theory, we identify distinct cognitive orientations in pre- and post-decisional phases. Cognitive orientations differ with regard to the width of attention or "receptiveness": in the pre-decisional phase players process as much information as possible, and then focus on implementing intended actions in the post-decisional phase. Participants viewed animated sequences of FPS games and decided which game character to rescue and how to implement their action. Oculomotor data shows a clear distinction between the width of attention in pre- and post-decisional phases, supporting the Rubicon model of action phases. Attention rapidly narrows when the goal intention is formed. We identify a lag of 800--900 ms between goal formation ("cognitive Rubicon") and motor response. Game engines may use this lag to respond anticipatively to actions that players have not yet executed. User interfaces with a gaze-dependent, gaze-controlled anticipation module should thus enhance game character behaviours and make them much "smarter".
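One plausible way a game engine could operationalize "attention narrowing", sketched below as an assumption rather than the study's actual analysis, is to track the spatial spread of the most recent fixations and flag the moment it drops below a threshold; the window size and threshold values are illustrative.

```python
import numpy as np

def attention_narrowed(fixations, window=10, threshold=40.0):
    """True once the spatial spread (RMS distance from the centroid) of
    the last `window` fixation points drops below `threshold` pixels,
    taken here as a proxy for entering the focused post-decisional phase."""
    if len(fixations) < window:
        return False
    recent = np.asarray(fixations[-window:], dtype=float)
    spread = np.sqrt(((recent - recent.mean(axis=0)) ** 2)
                     .sum(axis=1).mean())
    return spread < threshold

# Once this fires, the reported 800--900 ms gap before the motor response
# is the time budget an engine would have to react anticipatively.
```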
Citations: 5
Hyakunin-Eyesshu: a tabletop Hyakunin-Isshu game with computer opponent by the action prediction based on gaze detection
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983307
Michiya Yamamoto, M. Komeda, Takashi Nagamatsu, Tomio Watanabe
A tabletop interface can enable interactions between images and real objects using various sensors; therefore, it can be used to create many works in the media arts field. By focusing on gaze-and-touch interaction, we proposed the concept of an eye-tracking tabletop interface (ETTI) as a new type of interaction interface for the creation of media artworks. In this study, we developed "Hyakunin-Eyesshu," a prototype for ETTI content that enables users to play the traditional Japanese card game "Hyakunin-Isshu" with a computer character. In addition, we demonstrated this system at an academic meeting and obtained user feedback. We expect that our work will lead to advancements in interfaces for various interactions and to various new media artworks with precise gaze estimation.
Citations: 7
Eye tracking within the packaging design workflow: interaction with physical and virtual shelves
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983305
C. Tonkin, Andrew D. Ouzts, A. Duchowski
Measuring consumers' overt visual attention through eye tracking is a useful method of assessing a package design's impact on likely buyer purchase patterns. To preserve ecological validity, subjects should remain immersed in a shopping context throughout the entire study. Immersion can be achieved through proper priming, environmental cues, and visual stimuli. While a complete physical store offers the most realistic environment, the use of projectors to create a virtual environment is desirable for reasons of efficiency, cost, and flexibility. Results are presented from a study comparing consumers' visual behavior in front of either virtual or physical shelving, using eye-movement performance and process metrics as well as subjective impressions. Analysis suggests a difference in visual search performance between the two environments, even though the perceived difference is negligible.
Citations: 48
Designing gaze-supported multimodal interactions for the exploration of large image collections
Pub Date: 2011-05-26 DOI: 10.1145/1983302.1983303
S. Stellmach, S. Stober, A. Nürnberger, Raimund Dachselt
While eye tracking is becoming more and more relevant as a promising input channel, diverse applications using gaze control in a more natural way are still rather limited. Though several researchers have indicated the particularly high potential of gaze-based interaction for pointing tasks, gaze-only approaches are often investigated. However, time-consuming dwell-time activations limit this potential. To overcome this, we present a gaze-supported fisheye lens in combination with (1) a keyboard and (2) a tilt-sensitive mobile multi-touch device. In a user-centered design approach, we elicited how users would use the aforementioned input combinations. Based on the received feedback, we designed a prototype system for interaction with a remote display using gaze and a touch-and-tilt device. This eliminates gaze dwell-time activations and the well-known Midas Touch problem (unintentionally issuing an action via gaze). A formative user study testing our prototype provided further insights into how well the elaborated gaze-supported interaction techniques were experienced by users.
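As a hedged illustration of the interaction concept, the sketch below couples a fisheye lens whose centre follows the gaze with an explicit touch tap for confirmation, so no dwell timeout is involved and gaze alone never triggers an action. The class, the linear magnification profile, and all parameter values are assumptions for illustration, not the authors' design.

```python
from dataclasses import dataclass

@dataclass
class GazeFisheye:
    """Gaze positions the lens; an explicit touch event confirms selection,
    avoiding both dwell timeouts and Midas Touch activations."""
    radius: float = 150.0
    magnification: float = 2.5
    cx: float = 0.0
    cy: float = 0.0

    def on_gaze(self, x: float, y: float) -> None:
        self.cx, self.cy = x, y            # lens centre follows the gaze

    def magnify(self, x: float, y: float):
        """Screen point -> displaced point under a simple linear fisheye:
        scale falls from `magnification` at the centre to 1 at the rim."""
        dx, dy = x - self.cx, y - self.cy
        d = (dx * dx + dy * dy) ** 0.5
        if d == 0 or d >= self.radius:
            return (x, y)
        t = d / self.radius
        scale = self.magnification * (1 - t) + t
        return (self.cx + dx * scale, self.cy + dy * scale)

    def on_touch_tap(self, item_at):
        """Touch confirms: select whatever lies under the lens centre."""
        return item_at(self.cx, self.cy)

lens = GazeFisheye()
lens.on_gaze(400.0, 300.0)
print(lens.magnify(420.0, 310.0))   # nearby point pushed outward
```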
Citations: 67