Robot Behavior Design Expressing Confidence/Unconfidence based on Human Behavior Analysis

Haruka Sekino, Erina Kasano, Wei-Fen Hsieh, E. Sato-Shimokawara, Toru Yamaguchi
2020 17th International Conference on Ubiquitous Robots (UR), June 2020
DOI: 10.1109/UR49135.2020.9144862
Citations: 1

Abstract

Dialogue robots have been actively researched, but many of them rely only on verbal information. Human intention, however, is conveyed through both verbal and nonverbal information, so to convey intention as humans do, robots need to express it through both channels. This paper uses speech information and head-motion information to express confidence or unconfidence, since these are useful features for estimating a person's confidence. First, human behavior expressing the presence or absence of confidence was collected from 8 participants, recorded with a microphone and a video camera. To select the most understandable behavior, the participants' behavior was rated for confidence level by 3 raters, and the data of the participants whose behavior was rated most understandable were selected. The selected behavior was defined as the representative speech features and motion features, and robot behavior was designed from these representative behaviors. Finally, an experiment was conducted in which 5 participants rated the designed robot behavior. The results show that 3 participants correctly identified confidence/unconfidence from the representative speech features; the differences between confident and unconfident behavior are the time spent before answering, the effective (RMS) value of sound pressure, and utterance speed. Likewise, 3 participants correctly identified unconfident behavior from the representative motion features, namely a longer time before answering and a larger head rotation.
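Two of the speech features named in the abstract — the effective (RMS) value of sound pressure and the time spent before answering — can be computed directly from a recorded waveform. The paper does not publish code, so the sketch below is a minimal, hypothetical illustration: the amplitude-threshold onset detector is an assumption standing in for whatever segmentation the authors actually used.

```python
import numpy as np


def rms_sound_pressure(signal: np.ndarray) -> float:
    """Effective (root-mean-square) value of an audio signal."""
    return float(np.sqrt(np.mean(np.square(signal, dtype=np.float64))))


def pause_before_answer(signal: np.ndarray, sample_rate: int,
                        threshold_ratio: float = 0.1) -> float:
    """Seconds of near-silence before speech onset.

    Uses a simple amplitude threshold relative to the signal peak --
    a hypothetical stand-in for the paper's (unpublished) onset detection.
    """
    peak = np.max(np.abs(signal))
    if peak == 0:
        # No speech at all: the whole clip counts as the pause.
        return len(signal) / sample_rate
    onset = np.argmax(np.abs(signal) >= threshold_ratio * peak)
    return float(onset) / sample_rate
```

On a clip that starts with one second of silence followed by speech, `pause_before_answer` returns roughly 1.0; a lower RMS and a longer pause would, per the abstract's findings, both point toward unconfident behavior.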