
Latest publications: Proceedings of the 5th International Conference on Human Agent Interaction

Expectations and First Experience with a Social Robot
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132610
Kristiina Jokinen, G. Wilcock
This paper concerns interaction with social robots and focuses on the evaluation of a robot application that allows users to access interesting information from Wikipedia. The evaluation method compares the users' expectations with their experience with the robot, and takes into account their self-declared previous experience with robots. The results show that most participants had an overall positive experience, even though the averages indicate a slight negative tendency related to expectations of the robot's behavior and being understood by the robot. Interestingly, the most experienced users seem to be the most critical.
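The expectation-versus-experience comparison described above can be sketched as a simple gap score between pre- and post-interaction ratings on the same Likert items. This is a hypothetical illustration, not the authors' instrument; the item names and data are invented.

```python
# Sketch (hypothetical, not the paper's evaluation code): compare
# pre-interaction expectation ratings with post-interaction experience
# ratings on the same Likert items. A negative gap means the experience
# fell short of the expectation for that item.

def gap_scores(expectations, experiences):
    """Per-item mean(experience) - mean(expectation)."""
    gaps = {}
    for item in expectations:
        exp = expectations[item]
        got = experiences[item]
        gaps[item] = sum(got) / len(got) - sum(exp) / len(exp)
    return gaps

# Invented ratings for three participants on two items.
expectations = {"understands_me": [4, 5, 4], "behaves_naturally": [4, 4, 5]}
experiences  = {"understands_me": [3, 4, 4], "behaves_naturally": [4, 3, 4]}

print(gap_scores(expectations, experiences))
```

With these invented numbers both items come out slightly negative, mirroring the "slight negative tendency" the abstract reports for behavior expectations and being understood.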
Citations: 11
Designing Emotionally Expressive Robots: A Comparative Study on the Perception of Communication Modalities
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3125744
Christiana Tsiourti, A. Weiss, K. Wac, M. Vincze
Socially assistive agents, be they virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper, a highly human-like robot, and Hobbit, a robot with abstract human-like features. A qualitative and quantitative data analysis confirmed the expressive power of the face, but also demonstrated that body expressions or even simple head and locomotion movements can convey emotional information. These findings suggest that emotion recognition accuracy varies as a function of the modality, and that a higher degree of anthropomorphism does not necessarily lead to higher recognition accuracy. Our results further the understanding of how people respond to single communication modalities and have implications for designing recognizable multimodal expressions for robots.
Citations: 28
Symbol Emergence in Robotics for Modeling Human-Agent Interaction
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3134522
T. Nagai
Human intelligence is deeply dependent on its physical body, and its development requires interaction between that body and the surrounding environment, including other agents. However, how to integrate low-level motor control with high-level symbol manipulation remains an open problem. One of our research goals in the area called "symbol emergence in robotics" is to build a computational model of human intelligence spanning motor control to high-level symbol manipulation. This talk first introduces an unsupervised online learning algorithm that uses a hierarchical Bayesian framework to let robots categorize multimodal sensory signals such as audio, visual, and haptic information. The robot uses its physical body to grasp and observe an object from various viewpoints, and listens to the sound produced during the observation. The basic algorithm is to categorize the collected multimodal data so that the robot can better infer unobserved information; we call the resulting categories multimodal concepts. The latter half of this talk discusses an integrated computational model of human intelligence from motor control to high-level cognition. The core idea is to integrate the multimodal concepts with reinforcement learning. Furthermore, this talk attempts to model communication within the same framework, since the self-other discrimination process can be seen as the multimodal categorization of sensory-motor signals.
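As a loose intuition for how multimodal concepts can emerge from unsupervised categorization, the following is a crude stand-in, not the talk's hierarchical Bayesian model: each multimodal observation is assigned to the nearest existing category mean, and a new category is opened when no existing one is close enough. The threshold and feature vectors are hypothetical.

```python
import math

# Crude stand-in (NOT the hierarchical Bayesian model from the talk):
# online categorization of multimodal feature vectors. Assign each
# observation to the nearest category mean, or open a new category
# when none is within the (hypothetical) threshold.

def categorize_online(observations, threshold=1.0):
    categories = []  # each: {"mean": [...], "n": count}
    labels = []
    for obs in observations:
        best, best_d = None, float("inf")
        for i, c in enumerate(categories):
            d = math.dist(obs, c["mean"])
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > threshold:
            categories.append({"mean": list(obs), "n": 1})
            labels.append(len(categories) - 1)
        else:
            c = categories[best]
            c["n"] += 1
            # Incremental update of the running mean.
            c["mean"] = [m + (o - m) / c["n"] for m, o in zip(c["mean"], obs)]
            labels.append(best)
    return labels

# Two well-separated visual+haptic clusters -> two emergent "concepts".
obs = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
print(categorize_online(obs))  # [0, 0, 1, 1]
```

The point of the sketch is only the mechanism: categories are not given in advance but emerge from the data, which is what lets the robot later use them to infer unobserved modalities.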
Citations: 0
Continuous Multi-Modal Interaction Causes Human-Robot Alignment
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132599
Sebastian Wallkötter, Michael Joannou, Samuel Westlake, Tony Belpaeme
This study explores the effect of continuous interaction with a multi-modal robot on alignment in user dialogue. A game application of `20 Questions' was developed for a SoftBank Robotics NAO robot with supporting gestures, and a study was carried out in which subjects played a number of games. The robot's speech-comprehension confidence was logged and used to analyse the similarity between the dialogue the application accepts as legal and the users' speech. It was found that subjects significantly aligned their dialogue to the robot throughout continuous, multi-modal interaction.
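One simple way to quantify the kind of alignment measured here is lexical: the fraction of a user's words that fall inside the vocabulary the application accepts as legal dialogue. This is a hypothetical illustration, not the study's metric; the vocabulary and utterances are invented.

```python
# Sketch (hypothetical, not the study's code): lexical alignment as the
# fraction of a user's words that belong to the application's legal
# dialogue vocabulary. 1.0 means the utterance is fully "aligned".

def alignment_score(user_utterance, legal_vocabulary):
    words = user_utterance.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in legal_vocabulary)
    return hits / len(words)

# Invented '20 Questions'-style vocabulary.
legal = {"is", "it", "an", "animal", "yes", "no", "bigger", "than", "a", "cat"}
print(alignment_score("is it bigger than a cat", legal))  # 1.0
print(alignment_score("is it purple", legal))             # ~0.667
```

Tracking this score over successive games would show alignment increasing if users drift toward the robot's accepted phrasing, which is the effect the abstract reports.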
Citations: 2
Modeling Player Activity in a Physical Interactive Robot Game Scenario
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132608
E. S. Oliveira, Davide Orrù, T. Nascimento, Andrea Bonarini
We propose a quantitative human player model for Physically Interactive RoboGames that accounts for the combination of player activity (physical effort) and interaction level. The model is based on activity recognition and a description of the player's interaction with the robot co-player (proximity and body contraction index). Our approach has been tested on a dataset collected from a real, physical robot game, using activity patterns extracted by a custom 3-axis accelerometer sensor module and by the Microsoft Kinect sensor. The proposed model aims to inspire approaches that consider the activity of a human player in lively games against robots, and to foster the design of adaptive robot behavior capable of supporting the player's engagement in such games.
Citations: 8
Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle: A Pilot Study
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132588
Takashi Suegami, H. Sumioka, Fuminao Obayashi, Kyonosuke Ichii, Yoshinori Harada, Hiroshi Daimoto, A. Nakae, H. Ishiguro
This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulations of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory (sound) features. We investigated endocrine changes after participants rode a riding simulator with either the new interactive HMI or a typical HMI. Compared with the typical HMI, a significant decrease in salivary cortisol level was found after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, along with a significant reduction in salivary DHEA. The results demonstrate that, as hypothesized, haptic interaction from a vehicle can endocrinologically influence a rider and may thereby mitigate the rider's stress and aggression.
本文假设跨座式交通工具(如摩托车)将是一个合适的人机触觉交互平台,可以引发情感反应或积极调节人类情绪。基于这一思想,提出了一种新的跨座式车辆人机界面(HMI),用于与骑手进行触觉交互,以及其他视觉(设计),触觉(纹理和热量)和听觉特征(声音)。我们研究了使用新的交互式人机界面或典型人机界面玩骑行模拟器后的内分泌变化。结果显示,与典型HMI相比,乘坐交互式HMI后唾液皮质醇水平显著降低。乘坐交互式HMI后,唾液睾酮也趋于降低,唾液DHEA显著降低。结果表明,正如我们假设的那样,来自车辆的触觉交互可以从内分泌上影响骑乘者,然后可能减轻骑乘者的压力和攻击性。
Citations: 0
Exploring Mediation Effect of Mental Alertness for Expressive Lights: Preliminary Results of LED Light Animations on Intention to Buy Hedonic Products and Choose between Healthy and Unhealthy Food
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132598
Sichao Song, S. Yamada
Expressive light has been explored in a handful of previous studies as a means for robots, especially appearance-constrained robots that cannot employ human-like expressions, to convey internal states and interact with people. However, it is still unknown how different light expressions affect a person's perception and behavior. In this poster, we explore this research question by studying the effects of different expressive light animations on people's intention to buy hedonic products and on how they choose between healthy and unhealthy food. Our preliminary results show that participants assigned to a positive, low-arousal light animation condition had a higher intention of purchasing hedonic products and were inclined to choose unhealthy over healthy food. These findings are in line with previous literature in marketing research, suggesting that mental alertness mediates the effect of external stimuli on a person's behavioral intentions. Future work is thus required to evaluate such findings in a human-robot interaction context.
Citations: 1
Towards the Analysis of Movement Variability in Human-Humanoid Imitation Activities
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132595
Miguel P. Xochicale, Chris Baber
In this paper, we present preliminary results on the analysis of movement variability in human-humanoid imitation activities. We applied the state-space reconstruction theorem, which gives a better understanding of movement variability than techniques in the time or frequency domain. In our experiments, we tested the hypothesis that participants, even when performing the same arm movement, present slight differences in the way they move. With this in mind, we asked eighteen participants to copy NAO's arm movements while we collected data from inertial sensors attached to the participants' wrists and estimated head pose using the OpenFace framework. With the proposed metric, we found that sixteen of the eighteen participants imitated the robot well, moving their arms symmetrically and keeping their heads static; two participants, however, moved their heads synchronously even when the robot's head was completely static, and two other participants moved their arms asymmetrically to the robot. Although the work is in its early stage, we believe that such preliminary results are promising for applications in rehabilitation, sport science, entertainment or education.
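The state-space reconstruction the authors apply is typically done by time-delay embedding (Takens' theorem): a one-dimensional sensor signal x(t) is unfolded into vectors [x(t), x(t+τ), ..., x(t+(m−1)τ)]. The following is a minimal sketch with hypothetical embedding parameters, not the paper's pipeline.

```python
import numpy as np

# Sketch (hypothetical parameters, not the authors' code): state-space
# reconstruction of a 1-D signal by time-delay embedding. Each row of the
# result is one delay vector [x(t), x(t+tau), ..., x(t+(m-1)*tau)].

def delay_embed(x, m=3, tau=5):
    """Return an (len(x) - (m-1)*tau) x m matrix of delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Stand-in for one axis of a wrist accelerometer during a repeated movement.
t = np.linspace(0, 10 * np.pi, 500)
signal = np.sin(t)
embedded = delay_embed(signal, m=3, tau=5)
print(embedded.shape)  # (490, 3)
```

In the reconstructed space, repeated executions of the same movement trace nearby orbits, so the spread between orbits can serve as a variability measure of the kind the abstract's metric captures.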
Citations: 1
A Graphical Digital Personal Assistant that Grounds and Learns Autonomously
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132592
C. Kennington, Aprajita Shukla
We present a speech-driven digital personal assistant that is robust despite little or no training data and autonomously improves as it interacts with users. The system is able to establish and build common ground between itself and users by signaling understanding and by learning a mapping via interaction between the words that users actually speak and the system actions. We evaluated our system with real users and found an overall positive response. We further show through objective measures that autonomous learning improves performance in a simple itinerary filling task.
Citations: 3
Active Perception based on Energy Minimization in Multimodal Human-robot Interaction
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3125757
Takato Horii, Y. Nagai, M. Asada
Humans use various types of modalities to express their own internal states. If a robot interacting with humans can attend to only a limited number of signals, it should select the more informative ones to estimate its partner's state. We propose an active perception method that controls the robot's attention based on an energy minimization criterion. An energy-based model, which has learned to estimate the latent state from sensory signals, calculates energy values corresponding to the occurrence probabilities of the signals: the lower the energy, the higher their likelihood. Our method therefore selects the modality that provides the lowest expected energy among those available, exploiting the more frequent experiences. We employed a multimodal deep belief network to represent relationships between humans' states and expressions. Our method demonstrated better modality-selection performance than other methods in an emotion estimation task. We discuss the potential of our method to advance human-robot interaction.
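The selection rule itself reduces to an argmin over expected energies, since in an energy-based model p(x) is proportional to exp(−E(x)) and lower energy means higher likelihood. The sketch below uses hypothetical modality names and energy values, not outputs of the authors' deep belief network.

```python
# Sketch (hypothetical energies, not the authors' model): choose the
# modality whose expected energy under the current belief is lowest,
# i.e. the one the energy-based model considers most likely to yield
# familiar, informative observations.

def select_modality(expected_energy):
    """expected_energy: dict mapping modality -> expected energy."""
    return min(expected_energy, key=expected_energy.get)

# Lower energy ~ higher likelihood, since p(x) is proportional to exp(-E(x)).
energies = {"face": 1.2, "voice": 0.4, "posture": 2.0}
print(select_modality(energies))  # voice
```

The interesting part of the full method is of course computing those expected energies from the learned model; the rule that consumes them is this one-liner.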
Citations: 6