The MOBOT human-robot communication model

Stavroula-Evita Fotinea, E. Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, P. Maragos, C. Tzafestas
{"title":"The MOBOT human-robot communication model","authors":"Stavroula-Evita Fotinea, E. Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, P. Maragos, C. Tzafestas","doi":"10.1109/COGINFOCOM.2015.7390590","DOIUrl":null,"url":null,"abstract":"This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus in this framework of analysis is the definition of semantics of human actions, i.e. verbal and non-verbal signals, in a specific context with distinct Human-Robot interaction states. These states are captured and represented in terms of communicative behavioural patterns that influence, and in turn are adapted to the interaction flow with the goal to feed a multimodal human-robot communication system. This multimodal HRI model is defined upon, and ensures the usability of a multimodal sensory corpus acquired as a primary source of data retrieval, analysis and testing of mobility assistive robot prototypes.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COGINFOCOM.2015.7390590","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus of this framework of analysis is the definition of the semantics of human actions, i.e. verbal and non-verbal signals, in a specific context with distinct Human-Robot interaction states. These states are captured and represented in terms of communicative behavioural patterns that influence, and in turn adapt to, the interaction flow, with the goal of feeding a multimodal human-robot communication system. This multimodal HRI model is defined upon, and ensures the usability of, a multimodal sensory corpus acquired as a primary source for data retrieval, analysis and testing of mobility-assistive robot prototypes.
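The abstract describes interaction states that are driven by fused verbal and non-verbal signals. One common way to realise such a model is a state machine whose transitions are triggered by recognised multimodal events. The sketch below is purely illustrative: the state names, signal labels, and transition table are hypothetical placeholders, not the states or semantics defined in the MOBOT corpus.

```python
# Minimal sketch of a state machine for HRI interaction states.
# All states and signal labels here are invented for illustration.

# Transition table: (current_state, fused_signal) -> next_state.
TRANSITIONS: dict[tuple[str, str], str] = {
    ("idle", "verbal:help"): "request",
    ("request", "gesture:reach"): "assist",
    ("assist", "verbal:ok"): "confirm",
    ("confirm", "gesture:release"): "idle",
}

def step(state: str, signal: str) -> str:
    """Advance the interaction state given a fused multimodal signal;
    signals with no matching transition leave the state unchanged."""
    return TRANSITIONS.get((state, signal), state)

# Walk a short interaction episode: the user asks for help verbally,
# reaches for the robot, then acknowledges the assistance.
state = "idle"
for sig in ["verbal:help", "gesture:reach", "verbal:ok"]:
    state = step(state, sig)
print(state)  # -> confirm
```

In a full system the `signal` labels would come from upstream recognisers (speech, gesture, gaze), and the transition table would be learned or hand-crafted from the annotated sensory corpus rather than hard-coded.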