
Latest publications from the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Verbal conversation system for a socially embedded robot partner using emotional model
Jinseok Woo, János Botzheim, N. Kubota
This paper proposes a verbal conversation system for a robot partner using an emotional model. The robot partner calculates its emotional state based on the human's utterance sentence. Then, the robot partner controls its own utterance sentence based on the emotional parameters. As a result, the robot partner can interact with humans in an emotionally natural way. In this paper, we explain the three parts of the conversation system's structure. The first part is time-dependent selection based on the database contents. In this mode, the robot delivers time-sensitive content, such as schedules, and the mood parameter is used to vary the sentence. The second component is utterance flow learning, which selects the utterance contents. The robot selects an utterance sentence based on the utterance flow information and its mood value. The third component is sentence building based on predefined rules, which include a personality model of the robot partner. In this paper, we use emotional parameters derived from the human's sentences to build a natural communication system. Finally, we show experimental results of the proposed method and conclude the paper. Future research for improving the robot partner system is discussed as well.
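The abstract does not spell out how the mood parameter alters an utterance, so the following minimal Python sketch illustrates one plausible reading: a scalar mood value, drifted toward the sentiment of the human's sentence, selects among template variants. The template dictionary, band thresholds, and update rate are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical utterance templates keyed by topic, with variants per mood band.
TEMPLATES = {
    "schedule": {
        "positive": "Great news, you have {event} at {time}!",
        "neutral": "You have {event} at {time}.",
        "negative": "Just a reminder: {event} is at {time}.",
    },
}

def update_mood(mood, sentiment, rate=0.3):
    # Drift the robot's mood toward the sentiment estimated from the human's sentence.
    return (1 - rate) * mood + rate * sentiment

def mood_band(mood):
    # Map a scalar mood in [-1, 1] to a discrete band used for template selection.
    if mood > 0.3:
        return "positive"
    if mood < -0.3:
        return "negative"
    return "neutral"

def build_utterance(topic, slots, mood):
    return TEMPLATES[topic][mood_band(mood)].format(**slots)

# Example: the conversation has been trending negative, so the reminder is softened.
mood = update_mood(-0.2, -0.8)
print(build_utterance("schedule", {"event": "a doctor's appointment", "time": "15:00"}, mood))
```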
{"title":"Verbal conversation system for a socially embedded robot partner using emotional model","authors":"Jinseok Woo, János Botzheim, N. Kubota","doi":"10.1109/ROMAN.2015.7333685","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333685","url":null,"abstract":"This paper proposes a verbal conversation system for a robot partner using emotional model. The robot partner calculates its emotional state based on the utterance sentence of the human. Then, the robot partner can control its utterance sentence based on the emotional parameters. As a results, the robot partner can interact with human emotionally naturally. In this paper, we explain the three parts of the conversation system's structure. The first part is time dependent selection based on the database contents. In this mode, the robot tells timely important contents, for example schedules. The mood parameter is used to change the sentence in this mode. The second component is utterance flow learning to select the utterance contents. The robot selects utterance sentence based on the utterance flow information and using its mood value as well. The third component is sentence building based on predefined rules. The rules include personality model of the robot partner. In this paper, we use emotional parameters based on the human sentences to make a natural communication system. Finally, we show experimental results of the proposed method, and conclude the paper. The future research for improving the robot partner system is discussed as well.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127514663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 27
Real time object tracking via a mixture model
Dongxu Gao, Zhaojie Ju, Jiangtao Cao, Honghai Liu
Object tracking has been applied in many fields such as intelligent surveillance and computer vision. Although much progress has been made, many open problems still pose a huge challenge to object tracking. Currently, the difficulties are mainly caused by the appearance model as well as real-time performance. A novel method is proposed in this paper to handle both of these problems. Locally dense context features and image information (i.e., the relationship between the object and its surrounding regions) are combined in a Bayesian framework. The tracking problem can then be seen as a prediction problem that requires computing the posterior probability. Both scale variation and template updating are considered in the proposed algorithm to ensure its effectiveness. To make the algorithm run in a real-time system, a Fourier Transform (FT) is used when solving the Bayes equation. As a result, the MMOT (mixture model for object tracking) runs in real time and performs better than state-of-the-art algorithms on some challenging image sequences in terms of accuracy, speed and robustness.
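The abstract's key computational idea, treating tracking as posterior estimation and using a Fourier transform to keep it real-time, resembles spatio-temporal-context-style tracking, where the confidence map is a convolution evaluated in the frequency domain. The NumPy sketch below illustrates that general idea only; the window size, Gaussian focus weighting, and the `context_model` input are assumptions, not the MMOT algorithm itself.

```python
import numpy as np

def track_step(frame, prev_center, context_model, window=64):
    """One tracking update: build a context prior around the previous target
    position and convolve it with a learned context model via FFT."""
    y, x = prev_center
    half = window // 2
    patch = frame[y - half:y + half, x - half:x + half].astype(float)

    # Context prior: local intensities weighted by a Gaussian focus on the old center.
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = window / 4.0
    focus = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * sigma ** 2))
    prior = patch * focus

    # Posterior confidence map as a circular convolution, computed in the Fourier domain.
    conf = np.real(np.fft.ifft2(np.fft.fft2(context_model) * np.fft.fft2(prior)))

    # New estimate = location of the confidence peak, mapped back to frame coordinates.
    dy, dx = np.unravel_index(np.argmax(conf), conf.shape)
    return (y - half + dy, x - half + dx)

# Toy usage: a flat context model on a random frame (placeholders, not real data).
frame = np.random.rand(240, 320)
model = np.full((64, 64), 1.0 / 64 ** 2)
print(track_step(frame, (120, 160), model))
```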
{"title":"Real time object tracking via a mixture model","authors":"Dongxu Gao, Zhaojie Ju, Jiangtao Cao, Honghai Liu","doi":"10.1109/ROMAN.2015.7333701","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333701","url":null,"abstract":"Object tracking has been applied in many fields such as intelligent surveillance and computer vision. Although much progress has been made, there are still many puzzles which pose a huge challenge to object tracking. Currently, the problems are mainly caused by appearance model as well as real-time performance. A novel method was been proposed in this paper to handle both of these problems. Locally dense contexts feature and image information (i.e. the relationship between the object and its surrounding regions) are combined in a Bayes framework. Then the tracking problem can be seen as a prediction question which need to compute the posterior probability. Both scale variations and temple updating are considered in the proposed algorithm to assure the effectiveness. To make the algorithm runs in a real time system, a Fourier Transform (FT) is used when solving the Bayes equation. Therefore, the MMOT (Mixture model for object tracking) runs in real-time and performs better than state-of-the-art algorithms on some challenging image sequences in terms of accuracy, quickness and robustness.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125837071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Multi-modal sensing for human activity recognition
Barbara Bruno, Jasmin Grosinger, F. Mastrogiovanni, F. Pecora, A. Saffiotti, Subhash Sathyakeerthy, A. Sgorbissa
Robots for the elderly are a particular category of home assistive robots, helping people in the execution of daily life tasks to extend their independent life. Such robots should be able to determine the level of independence of the user and track its evolution over time, to adapt the assistance to the person's capabilities and needs. Human Activity Recognition systems employ various sensing strategies, relying on environmental or wearable sensors, to recognize the daily life activities which provide insights into the health status of a person. The main contribution of this article is the design of a heterogeneous information management framework, allowing for the description of a wide variety of human activities in terms of multi-modal environmental and wearable sensing data and providing accurate knowledge about the user's activity to any assistive robot.
{"title":"Multi-modal sensing for human activity recognition","authors":"Barbara Bruno, Jasmin Grosinger, F. Mastrogiovanni, F. Pecora, A. Saffiotti, Subhash Sathyakeerthy, A. Sgorbissa","doi":"10.1109/ROMAN.2015.7333653","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333653","url":null,"abstract":"Robots for the elderly are a particular category of home assistive robots, helping people in the execution of daily life tasks to extend their independent life. Such robots should be able to determine the level of independence of the user and track its evolution over time, to adapt the assistance to the person capabilities and needs. Human Activity Recognition systems employ various sensing strategies, relying on environmental or wearable sensors, to recognize the daily life activities which provide insights on the health status of a person. The main contribution of the article is the design of an heterogeneous information management framework, allowing for the description of a wide variety of human activities in terms of multi-modal environmental and wearable sensing data and providing accurate knowledge about the user activity to any assistive robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115222503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 8
Long-term knowledge acquisition in a memory-based epigenetic robot architecture for verbal interaction
F. Pratama, F. Mastrogiovanni, Sungmoon Jeong, N. Chong
We present a robot cognitive framework based on (a) a memory-like architecture; and (b) the notion of “context”. We posit that relying solely on machine learning techniques may not be the right approach for long-term, continuous knowledge acquisition. Since we are interested in long-term human-robot interaction, we focus on a scenario where a robot “remembers” relevant events happening in the environment. By visually sensing its surroundings, the robot is expected to infer and remember snapshots of events, and recall specific past events based on inputs and contextual information from humans. Using a COTS vision framework for the experiment, we show that the robot is able to form “memories” and recall related events based on cues and the context given during the human-robot interaction process.
{"title":"Long-term knowledge acquisition in a memory-based epigenetic robot architecture for verbal interaction","authors":"F. Pratama, F. Mastrogiovanni, Sungmoon Jeong, N. Chong","doi":"10.1109/ROMAN.2015.7333563","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333563","url":null,"abstract":"We present a robot cognitive framework based on (a) a memory-like architecture; and (b) the notion of “context”. We posit that relying solely on machine learning techniques may not be the right approach for a long-term, continuous knowledge acquisition. Since we are interested in long-term human-robot interaction, we focus on a scenario where a robot “remembers” relevant events happening in the environment. By visually sensing its surroundings, the robot is expected to infer and remember snapshots of events, and recall specific past events based on inputs and contextual information from humans. Using a COTS vision frameworks for the experiment, we show that the robot is able to form “memories” and recall related events based on cues and the context given during the human-robot interaction process.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122408354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
Anomaly state assessing of human using walker-type support system based on statistical analysis
Y. Hirata, Hiroki Yamaya, K. Kosuge, Atsushi Koujina, T. Shirakawa, Takahiro Katayama
In this paper, we propose a method to assess the extent of a human's anomaly state when using a walker-type support system. Elderly and handicapped people use the walker-type support system to keep their balance and support their weight. Although the walker-type support system is easy to move based on the force applied by the user, several accidents such as falls and collisions with obstacles have been reported. An anomaly state that could cause a severe injury to the user should be detected before an accident occurs, and the walker-type support system should prevent such accidents. In this paper, we focus on assessing the extent of the user's anomaly state based on a statistical analysis of the force applied by the user. This research models the applied force of the user in real time by using a Gaussian Mixture Model (GMM), and we determine each GMM parameter statistically. In addition, we assess the extent of the user's anomaly state by using the Hellinger score, which compares the data set of the normal state with that of the anomaly state. The proposed method is applied to a walker-type support system we developed with a simple force sensor, and we conduct experiments under several walking states and environmental conditions.
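As a rough illustration of the GMM-plus-Hellinger idea in the abstract, the sketch below fits Gaussian mixtures to a window of applied-force samples from normal walking and to the most recent samples, then scores their divergence with the closed-form Hellinger distance between the dominant Gaussian components. The one-dimensional force signal, two-component mixtures, and dominant-component comparison are simplifying assumptions, not the authors' formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def hellinger_gauss(mu1, s1, mu2, s2):
    # Closed-form Hellinger distance between two univariate Gaussians.
    bc = np.sqrt(2 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * np.exp(
        -((mu1 - mu2) ** 2) / (4 * (s1 ** 2 + s2 ** 2)))
    return np.sqrt(1.0 - bc)

def anomaly_score(normal_forces, recent_forces, n_components=2):
    # Fit one GMM to forces recorded during normal walking, another to the latest window.
    fit = lambda x: GaussianMixture(n_components=n_components, random_state=0).fit(
        np.asarray(x, dtype=float).reshape(-1, 1))
    gmm_normal, gmm_recent = fit(normal_forces), fit(recent_forces)

    # Compare only the dominant component of each mixture (a deliberate simplification).
    i = int(np.argmax(gmm_normal.weights_))
    j = int(np.argmax(gmm_recent.weights_))
    return hellinger_gauss(gmm_normal.means_[i, 0], np.sqrt(gmm_normal.covariances_[i, 0, 0]),
                           gmm_recent.means_[j, 0], np.sqrt(gmm_recent.covariances_[j, 0, 0]))

# Toy usage with synthetic force samples (newtons); larger scores suggest an anomaly.
rng = np.random.default_rng(0)
normal = rng.normal(30.0, 3.0, 500)    # typical handle force while walking
stumble = rng.normal(55.0, 8.0, 100)   # sudden heavy loading, e.g. loss of balance
print(anomaly_score(normal, normal[:100]), anomaly_score(normal, stumble))
```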
{"title":"Anomaly state assessing of human using walker-type support system based on statistical analysis","authors":"Y. Hirata, Hiroki Yamaya, K. Kosuge, Atsushi Koujina, T. Shirakawa, Takahiro Katayama","doi":"10.1109/ROMAN.2015.7333681","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333681","url":null,"abstract":"In this paper, we propose a method to assess an extent of anomaly state of human using a walker-type support system. The elderly and the handicapped people use the walker-type support system to keep their balance and support their weight. Although the walker-type support system is easy to move based on the applied force of the user, several accidents such as falling and colliding with the obstacle have been reported. The anomaly state that causes a severe injury of the user should be detected before accident and the walker-type support system should prevent such accidents. In this paper, we focus on assessing the extent of the anomaly state of the user based on the statistical analysis of the applied force of the user. This research models the applied force of the user in real time by using the Gaussian Mixture Model (GMM), and we determine each parameter of GMM statistically. In addition, we assess the extent of the anomaly state of the user by using the Hellinger score, which can compare the data set of the normal state with that of anomaly state. The proposed method is applied to developed walker-type support system with simple force sensor, and we conduct the experiments in the several walking states and the environmental conditions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129240739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
Calligraphy-stroke learning support system using projection
Masashi Narita, T. Matsumaru
In this paper, a calligraphy learning support system is presented that supports brushwork learning by using a projector. The system was designed to provide three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. In order to instruct three-dimensional brushwork such as the writing speed, pressure, and orientation of the brush, we propose an instruction method that presents the information at the brush tip only. This method visualizes the brush position and orientation. In addition, a copying experiment was performed using the proposed method, and the efficiency of the proposed method was examined through the experiment.
{"title":"Calligraphy-stroke learning support system using projection","authors":"Masashi Narita, T. Matsumaru","doi":"10.1109/ROMAN.2015.7333576","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333576","url":null,"abstract":"In this paper, a calligraphy learning support system is presented for supporting brushwork learning by using a projector. The system was designed to provide the three kinds of training according to the learner's ability as followings: copying training, tracing training, and combination of them. In order to instruct the three-dimensional brushwork such as the writing speed, pressure, and orientation of the brush, we proposed the instruction method by presenting the information to only brush tip. This method can be visualized a brush position and the orientation. In addition, the copying experiment was performed using the proposed method. As a result, the efficiency of the proposed method was examined through experiment.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123911755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 13
Combined kinesthetic and simulated interface for teaching robot motion models
Elizabeth Cha, Klas Kronander, A. Billard
The success of a Learning from Demonstration system depends on the quality of the demonstrated data. Kinesthetic demonstrations are often assumed to be the best method of providing demonstrations for manipulation tasks; however, there is little research to support this. In this work, we explore the use of a simulated environment as an alternative to, and in combination with, kinesthetic demonstrations when using an autonomous dynamical system to encode motion. We present the results of a user study comparing three demonstration interfaces for a manipulation task on a KUKA LWR robot.
{"title":"Combined kinesthetic and simulated interface for teaching robot motion models","authors":"Elizabeth Cha, Klas Kronander, A. Billard","doi":"10.1109/ROMAN.2015.7333655","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333655","url":null,"abstract":"The success of a Learning from Demonstration system depends on the quality of the demonstrated data. Kinesthetic demonstrations are often assumed to be the best method of providing demonstrations for manipulation tasks, however, there is little research to support this. In this work, we explore the use of a simulated environment as an alternative to and in combination with kinesthetic demonstrations when using an autonomous dynamical system to encode motion. We present the results of a user study comparing three demonstrations interfaces for a manipulation task on a KUKA LWR robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132491304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
Uncovering emotional memories in robot soccer players
Christopher Allan, M. Couceiro, P. A. Vargas
Memory is central to the emotional experience of playing sports. The capacity to recall great achievements, triumphs and defeats inevitably influences the emotional state of athletes and people in general. Nevertheless, research on robot competitions that strive to mimic real-world soccer, such as the well-known RoboCup challenge, has never considered the relevance of memory and emotions, nor their possible connection. This paper proposes a data mining approach to emotional memory modelling with the purpose of replicating the link between emotion and memory in a RoboCup scenario. A model of emotional fluctuations is also proposed, based on neurological disorders, to investigate their effect on the robot's ability to choose appropriate behaviours. The proposed model is evaluated using the NAO robot in a simulation environment. By utilizing emotion to assess stored memories, NAO was able to successfully choose behaviours based on the optimal outcomes achieved in the past.
{"title":"Uncovering emotional memories in robot soccer players","authors":"Christopher Allan, M. Couceiro, P. A. Vargas","doi":"10.1109/ROMAN.2015.7333599","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333599","url":null,"abstract":"Memory is central to the emotional experience of playing sports. The capacity to recall great achievements, triumphs and defeats inevitably influences the emotional state of athletes and people in general. Nevertheless, research on robot competitions that has been striving to mimic real-world soccer, such as the well-known RoboCup challenge, never considered the relevance of memory and emotions, nor their possible connection. This paper proposes a data mining approach to emotional memory modelling with the purpose of replicating the link between emotion and memory in a Ro-boCup scenario. A model of emotional fluctuations is also proposed based on neurological disorders to investigate their effect on the robot's ability to choose appropriate behaviours. The proposed model is evaluated using the NAO robot on a simulation environment. By utilizing emotion to assess memories stored, NAO was able to successfully choose behaviours based on the optimal outcomes achieved in the past.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130906253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Dynamics of social positioning patterns in group-robot interactions
J. Vroon, M. Joosse, M. Lohse, Jan Kolkmeier, Jaebok Kim, K. Truong, G. Englebienne, D. Heylen, V. Evers
When a mobile robot interacts with a group of people, it has to consider its position and orientation. We introduce a novel study aimed at generating hypotheses on suitable behavior for such social positioning, explicitly focusing on interaction with small groups of users and allowing for the temporal and social dynamics inherent in most interactions. In particular, the interactions we look at are approach, converse and retreat. In this study, groups of three participants and a telepresence robot (controlled remotely by a fourth participant) solved a task together while we collected quantitative and qualitative data, including tracking of positioning/orientation and ratings of the behaviors used. In the data we observed a variety of patterns that can be extrapolated to hypotheses using inductive reasoning. One such pattern/hypothesis is that a (telepresence) robot could pass through a group when retreating, without this affecting how comfortable that retreat is for the group members. Another is that a group will rate the position/orientation of a (telepresence) robot as more comfortable when it is aimed more at the center of that group.
{"title":"Dynamics of social positioning patterns in group-robot interactions","authors":"J. Vroon, M. Joosse, M. Lohse, Jan Kolkmeier, Jaebok Kim, K. Truong, G. Englebienne, D. Heylen, V. Evers","doi":"10.1109/ROMAN.2015.7333633","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333633","url":null,"abstract":"When a mobile robot interacts with a group of people, it has to consider its position and orientation. We introduce a novel study aimed at generating hypotheses on suitable behavior for such social positioning, explicitly focusing on interaction with small groups of users and allowing for the temporal and social dynamics inherent in most interactions. In particular, the interactions we look at are approach, converse and retreat. In this study, groups of three participants and a telepresence robot (controlled remotely by a fourth participant) solved a task together while we collected quantitative and qualitative data, including tracking of positioning/orientation and ratings of the behaviors used. In the data we observed a variety of patterns that can be extrapolated to hypotheses using inductive reasoning. One such pattern/hypothesis is that a (telepresence) robot could pass through a group when retreating, without this affecting how comfortable that retreat is for the group members. Another is that a group will rate the position/orientation of a (telepresence) robot as more comfortable when it is aimed more at the center of that group.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125746176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 26
Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations
Marwin Sorce, G. Pointeau, Maxime Petit, Anne-Laure Mealier, G. Gibert, Peter Ford Dominey
With the Robonaut-2 humanoid robot now permanently flying on the ISS, the potential role for robots participating in cooperative activity in space is becoming a reality. Recent research has demonstrated that cooperation in the joint achievement of shared goals is a promising framework for human interaction with robots, with application in space. Perhaps more importantly, with the turn-over of crew members, robots could play an important role in maintaining and transferring expertise between outgoing and incoming crews. In this context, the current research builds on our experience with systems for cooperative human-robot interaction, introducing novel interface and interaction modalities that exploit the long-term experience of the robot. We implement a system where the human agent can teach the Nao humanoid new actions by physical demonstration, visual imitation, and spoken command. These actions can then be composed into joint action plans that coordinate the cooperation between agent and human. We also implement algorithms for an Autobiographical Memory (ABM) that provides access to all of the robot's interaction experience. These functions are assembled in a novel interaction paradigm for the capture, maintenance and transfer of knowledge in a five-tiered structure. The five tiers allow the robot to 1) learn simple behaviors, 2) learn shared plans composed from the learned behaviors, 3) execute the learned shared plans efficiently, 4) teach shared plans to new humans, and 5) answer questions from the human to better understand the origin of the shared plan. Our results demonstrate the feasibility of this system and indicate that such humanoid robot systems will provide a potential mechanism for the accumulation and transfer of knowledge between humans who are not co-present. Applications to space flight operations as a target scenario are discussed.
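To make the five-tier description more concrete, here is a minimal Python sketch of the lower tiers only: a behavior/plan library backed by an append-only autobiographical log that can later answer where a shared plan came from. All class names, fields, and the event schema are invented for illustration; the system described in the paper is considerably richer.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Dict, List

@dataclass
class AutobiographicalMemory:
    # Append-only log of interaction events the robot can later be asked about (tier 5).
    events: List[dict] = field(default_factory=list)

    def record(self, kind: str, **details):
        self.events.append({"time": datetime.now().isoformat(), "kind": kind, **details})

    def explain(self, plan_name: str) -> List[dict]:
        # Answer "where does this plan come from?" by replaying its provenance events.
        return [e for e in self.events if e.get("plan") == plan_name]

class PlanLibrary:
    def __init__(self, memory: AutobiographicalMemory):
        self.behaviors: Dict[str, Callable[[], None]] = {}   # tier 1: learned primitives
        self.plans: Dict[str, List[str]] = {}                # tier 2: shared plans
        self.memory = memory

    def learn_behavior(self, name: str, action: Callable[[], None], teacher: str):
        self.behaviors[name] = action
        self.memory.record("behavior_learned", behavior=name, teacher=teacher)

    def learn_plan(self, name: str, steps: List[str], teacher: str):
        self.plans[name] = steps
        self.memory.record("plan_learned", plan=name, steps=steps, teacher=teacher)

    def execute(self, plan_name: str):                        # tier 3: execute a shared plan
        for step in self.plans[plan_name]:
            self.behaviors[step]()
        self.memory.record("plan_executed", plan=plan_name)

# Toy usage: teach two behaviors, compose a plan, run it, then ask about its origin.
mem = AutobiographicalMemory()
lib = PlanLibrary(mem)
lib.learn_behavior("point_at_panel", lambda: print("pointing"), teacher="crew_member_A")
lib.learn_behavior("hand_over_tool", lambda: print("handing over"), teacher="crew_member_A")
lib.learn_plan("inspect_panel", ["point_at_panel", "hand_over_tool"], teacher="crew_member_A")
lib.execute("inspect_panel")
print(mem.explain("inspect_panel"))
```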
{"title":"Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations","authors":"Marwin Sorce, G. Pointeau, Maxime Petit, Anne-Laure Mealier, G. Gibert, Peter Ford Dominey","doi":"10.1109/ROMAN.2015.7333565","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333565","url":null,"abstract":"With the Robonaut-2 humanoid robot now permanently flying on the ISS, the potential role for robots participating in cooperative activity in space is becoming a reality. Recent research has demonstrated that cooperation in the joint achievement of shared goals is a promising framework for human interaction with robots, with application in space. Perhaps more importantly, with the turn-over of crew members, robots could play an important role in maintaining and transferring expertise between outgoing and incoming crews. In this context, the current research builds on our experience in systems for cooperative human-robot interaction, introducing novel interface and interaction modalities that exploit the long-term experience of the robot. We implement a system where the human agent can teach the Nao humanoid new actions by physical demonstration, visual imitation, and spoken command. These actions can then be composed into joint action plans that coordinate the cooperation between agent and human. We also implement algorithms for an Autobiographical Memory (ABM) that provides access to of all of the robots interaction experience. These functions are assembled in a novel interaction paradigm for the capture, maintenance and transfer of knowledge in a five-tiered structure. The five tiers allow the robot to 1) learn simple behaviors, 2) learn shared plans composed from the learned behaviors, 3) execute the learned shared plans efficiently, 4) teach shared plans to new humans, and 5) answer questions from the human to better understand the origin of the shared plan. Our results demonstrate the feasibility of this system and indicate that such humanoid robot systems will provide a potential mechanism for the accumulation and transfer of knowledge, between humans who are not co-present. Applications to space flight operations as a target scenario are discussed.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129727448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 11