Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application

Nikolaus Bee, J. Wagner, E. André, Thurid Vogt, Fred Charles, D. Pizzi, M. Cavazza
{"title":"Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application","authors":"Nikolaus Bee, J. Wagner, E. André, Thurid Vogt, Fred Charles, D. Pizzi, M. Cavazza","doi":"10.1145/1891903.1891915","DOIUrl":null,"url":null,"abstract":"In this paper, we investigate the user's eye gaze behavior during the conversation with an interactive storytelling application. We present an interactive eye gaze model for embodied conversational agents in order to improve the experience of users participating in Interactive Storytelling. The underlying narrative in which the approach was tested is based on a classical XIXth century psychological novel: Madame Bovary, by Flaubert. At various stages of the narrative, the user can address the main character or respond to her using free-style spoken natural language input, impersonating her lover. An eye tracker was connected to enable the interactive gaze model to respond to user's current gaze (i.e. looking into the virtual character's eyes or not). We conducted a study with 19 students where we compared our interactive eye gaze model with a non-interactive eye gaze model that was informed by studies of human gaze behaviors, but had no information on where the user was looking. The interactive model achieved a higher score for user ratings than the non-interactive model. In addition we analyzed the users' gaze behavior during the conversation with the virtual character.","PeriodicalId":181145,"journal":{"name":"ICMI-MLMI '10","volume":"107 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICMI-MLMI '10","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1891903.1891915","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27

Abstract

In this paper, we investigate the user's eye gaze behavior during conversation with an interactive storytelling application. We present an interactive eye gaze model for embodied conversational agents in order to improve the experience of users participating in Interactive Storytelling. The underlying narrative in which the approach was tested is based on a classical XIXth century psychological novel: Madame Bovary, by Flaubert. At various stages of the narrative, the user can address the main character or respond to her using free-style spoken natural language input, impersonating her lover. An eye tracker was connected to enable the interactive gaze model to respond to the user's current gaze (i.e. whether or not the user is looking into the virtual character's eyes). We conducted a study with 19 students in which we compared our interactive eye gaze model with a non-interactive eye gaze model that was informed by studies of human gaze behaviors but had no information on where the user was looking. The interactive model achieved a higher score in user ratings than the non-interactive model. In addition, we analyzed the users' gaze behavior during the conversation with the virtual character.
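To make the idea of an interactive gaze model concrete, the sketch below shows one possible control loop in Python. It is a minimal illustration, not the authors' implementation: the eye region coordinates, timing constants, probabilities, and the `read_gaze`/`set_agent_gaze` callbacks are all hypothetical assumptions. The key point it captures is the interactive condition described in the abstract: the agent's gaze behavior is conditioned on whether the user is currently looking into the character's eyes.

```python
# Hypothetical sketch of an interactive gaze controller. All names, screen
# regions, probabilities, and timing values are illustrative assumptions,
# not the system described in the paper.
import random
import time
from dataclasses import dataclass


@dataclass
class GazeSample:
    x: float  # normalized screen coordinate, 0..1
    y: float  # normalized screen coordinate, 0..1


# Assumed screen region covering the virtual character's eyes.
EYE_REGION = (0.45, 0.25, 0.55, 0.35)  # (x_min, y_min, x_max, y_max)


def user_looks_at_eyes(sample: GazeSample) -> bool:
    """Return True if the eye-tracker sample falls inside the character's eye region."""
    x_min, y_min, x_max, y_max = EYE_REGION
    return x_min <= sample.x <= x_max and y_min <= sample.y <= y_max


def agent_gaze_target(mutual_gaze: bool, hold_ratio: float = 0.7) -> str:
    """Pick the agent's gaze target for the next interval.

    When the user establishes eye contact, the agent mostly holds mutual gaze
    but occasionally averts so it does not appear to stare; when the user
    looks elsewhere, the agent only glances at the user now and then.
    """
    if mutual_gaze:
        return "user_eyes" if random.random() < hold_ratio else "look_away"
    return "user_eyes" if random.random() < 0.3 else "look_away"


def run_gaze_loop(read_gaze, set_agent_gaze, interval_s: float = 0.5) -> None:
    """Poll the eye tracker and update the agent's gaze every `interval_s` seconds."""
    while True:
        sample = read_gaze()                       # latest eye-tracker sample
        mutual = user_looks_at_eyes(sample)
        set_agent_gaze(agent_gaze_target(mutual))  # drive the character's eyes
        time.sleep(interval_s)
```

A non-interactive baseline of the kind used for comparison in the study would ignore the `mutual` flag entirely and schedule gaze shifts purely from timing statistics drawn from human gaze studies.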