Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments

W. Steptoe, Oyewole Oyekoya, A. Murgia, R. Wolff, John P Rae, Estefania Guimaraes, D. Roberts, A. Steed
DOI: 10.1109/VR.2009.4811003
Venue: 2009 IEEE Virtual Reality Conference
Published: 2009-03-14
Citations: 43

Abstract

In face-to-face collaboration, eye gaze is used both as a bidirectional signal to monitor and indicate focus of attention and action, as well as a resource to manage the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars representing and controlled by each participant share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVETM-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants, to a gaze model informed by head orientation for saccade generation, and static gaze featuring non-moving eyes. We analyse task performance, subjective user experience, and interactional behaviour. While not providing statistically significant benefit over static gaze, tracked gaze is observed as the highest performing condition. However, the gaze model resulted in significantly lower task performance and increased error rate.
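The abstract's middle condition — a gaze model that uses head orientation to drive saccade generation — can be pictured roughly as follows. This is a hypothetical sketch, not the authors' implementation: the function names, the uniform angular jitter, and the fixed fixation duration are all illustrative assumptions standing in for whatever model the paper actually used.

```python
import random

def head_informed_saccade(head_yaw, head_pitch, spread_deg=10.0, rng=None):
    """Sample one saccade target (yaw, pitch) near the head direction.

    Illustrative assumption: the avatar's eyes deviate from the tracked
    head direction by a small random angular offset on each saccade.
    """
    rng = rng or random.Random()
    eye_yaw = head_yaw + rng.uniform(-spread_deg, spread_deg)
    eye_pitch = head_pitch + rng.uniform(-spread_deg, spread_deg)
    return eye_yaw, eye_pitch

def gaze_sequence(head_samples, fixation_frames=12, rng=None):
    """Turn a stream of head orientations into per-frame eye directions.

    A new saccade target is chosen every `fixation_frames` frames;
    between saccades the eyes hold the previous fixation, so the model
    alternates saccades with static dwell (hypothetical timing).
    """
    rng = rng or random.Random(0)
    gaze = []
    target = None
    for i, (yaw, pitch) in enumerate(head_samples):
        if i % fixation_frames == 0:
            target = head_informed_saccade(yaw, pitch, rng=rng)
        gaze.append(target)
    return gaze
```

The sketch makes the contrast between the three conditions concrete: tracked gaze would replace `head_informed_saccade` with real eye-tracker samples, while static gaze would return a constant direction for every frame.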