Interactive virtual characters

D. Thalmann, N. Magnenat-Thalmann
{"title":"Interactive virtual characters","authors":"D. Thalmann, N. Magnenat-Thalmann","doi":"10.1145/2542266.2542277","DOIUrl":null,"url":null,"abstract":"In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high level models. Particularly, we will describe: 1. How to capture real-time gestures and facial emotions from real people, how to recognize any real person, how to recognize certain sounds. We will present a state of the art and some new avenues of research. 2. How to model a variety of interactive reactions of the virtual humans and social robots (facial expressions, gestures, multiparty dialog, etc) depending on the real scenes input parameters. 3. How we can define Virtual Characters that have an emotional behavior (personality and mood and emotions) and how to allow them to remember us and have a believable relationship with us. This part is to allow virtual humans and social robots to have an individual and not automatic behaviour. This tutorial will also address the modelling of long-term and short-term memory and the interactions between users and virtual humans based on gaze and how to model visual attention. We will explain different methods to identify user actions and how to allow Virtual Characters to answer to them. 4. This tutorial will also address the modelling of long-term and short-term memory and the interactions between users and virtual humans based on gaze and how to model visual attention. We will present the concepts of behavioral animation, group simulation, intercommunication between virtual humans, social humanoid robots and real people. Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx) where autonomous virtual humans and social robots react to a few actions from the real people.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Societal Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2542266.2542277","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high-level models. In particular, we will cover:

1. How to capture gestures and facial emotions from real people in real time, how to recognize individual people, and how to recognize certain sounds. We will present the state of the art and some new avenues of research.

2. How to model a variety of interactive reactions of virtual humans and social robots (facial expressions, gestures, multiparty dialogue, etc.) depending on input parameters from the real scene.

3. How to define virtual characters that have emotional behaviour (personality, mood, and emotions), and how to allow them to remember us and build a believable relationship with us. The aim of this part is to give virtual humans and social robots individual rather than automatic behaviour. We will explain different methods for identifying user actions and for allowing virtual characters to respond to them (see the sketch after this abstract).

4. How to model long-term and short-term memory, gaze-based interaction between users and virtual humans, and visual attention. We will present the concepts of behavioural animation, group simulation, and intercommunication between virtual humans, social humanoid robots, and real people.

Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx), where autonomous virtual humans and social robots react to a few actions from real people.
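Point 3 above describes characters whose reactions depend on personality, mood, and emotions, plus short- and long-term memory of individual users. The abstract does not give an implementation, so the following is only a minimal Python sketch of how such a state could be wired together; the class names, the single-valence emotion representation, and the update constants are all assumptions for illustration, not the authors' model.

```python
"""Illustrative sketch only: a hypothetical personality/mood/emotion loop with
short- and long-term memory, loosely in the spirit of appraisal-based models.
All names and constants here are assumptions, not the tutorial's method."""

from dataclasses import dataclass, field
from collections import deque
import time


@dataclass
class Personality:
    # Static traits in [0, 1]; they bias how strongly the character reacts.
    extraversion: float = 0.5
    neuroticism: float = 0.5


@dataclass
class EmotionalState:
    # Transient emotion and slower-moving mood, each a valence in [-1, 1]
    # to keep the sketch minimal.
    emotion: float = 0.0
    mood: float = 0.0


@dataclass
class Memory:
    # Short-term memory: the last few perceived events.
    short_term: deque = field(default_factory=lambda: deque(maxlen=10))
    # Long-term memory: per-user aggregates that persist across sessions.
    long_term: dict = field(default_factory=dict)

    def remember(self, user_id: str, event: str, valence: float) -> None:
        self.short_term.append((time.time(), user_id, event, valence))
        stats = self.long_term.setdefault(user_id, {"encounters": 0, "affinity": 0.0})
        stats["encounters"] += 1
        # Running average of how pleasant interactions with this user were.
        stats["affinity"] += (valence - stats["affinity"]) / stats["encounters"]


def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))


class VirtualCharacter:
    def __init__(self, personality: Personality):
        self.personality = personality
        self.state = EmotionalState()
        self.memory = Memory()

    def perceive(self, user_id: str, event: str, valence: float) -> str:
        """Appraise an input event (e.g. a recognized gesture or facial
        expression) and update emotion, mood, and memory."""
        # Neurotic characters react more strongly to negative events.
        gain = 1.0 + self.personality.neuroticism * max(0.0, -valence)
        self.state.emotion = clamp(self.state.emotion + gain * valence)
        # Mood slowly follows emotion; emotion decays back toward mood.
        self.state.mood = clamp(0.95 * self.state.mood + 0.05 * self.state.emotion)
        self.state.emotion = clamp(0.7 * self.state.emotion + 0.3 * self.state.mood)
        self.memory.remember(user_id, event, valence)
        return self.react(user_id)

    def react(self, user_id: str) -> str:
        """Pick a coarse reaction that depends on the current state and on the
        long-term relationship with this particular user."""
        affinity = self.memory.long_term.get(user_id, {}).get("affinity", 0.0)
        if self.state.emotion > 0.3 or affinity > 0.5:
            return "smile_and_greet" if self.personality.extraversion > 0.5 else "nod"
        if self.state.emotion < -0.3:
            return "frown"
        return "idle_gaze"


if __name__ == "__main__":
    character = VirtualCharacter(Personality(extraversion=0.8, neuroticism=0.3))
    print(character.perceive("alice", "smiled_at_character", valence=0.6))
    print(character.perceive("alice", "waved", valence=0.4))
```

The property the sketch tries to capture is that the same input event produces different reactions for different personalities and for different users it has met before, which is what distinguishes the "individual rather than automatic" behaviour the abstract asks for.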