Verbal conversation system for a socially embedded robot partner using emotional model
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333685
Jinseok Woo, János Botzheim, N. Kubota
This paper proposes a verbal conversation system for a robot partner based on an emotional model. The robot partner estimates its emotional state from the human's utterance and then adjusts its own utterances according to the emotional parameters. As a result, the robot partner can interact with humans in an emotionally natural way. We explain the three parts of the conversation system's structure. The first is time-dependent selection based on database contents: in this mode the robot announces time-critical content, such as schedules, and the mood parameter is used to vary the sentence. The second is utterance flow learning for selecting utterance contents: the robot selects an utterance based on the utterance flow information and on its mood value. The third is sentence building based on predefined rules, which include a personality model of the robot partner. Throughout, emotional parameters derived from the human's sentences are used to make communication natural. Finally, we present experimental results for the proposed method, conclude the paper, and discuss future research for improving the robot partner system.
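As a concrete illustration of the mood-driven utterance control described above, the following Python sketch updates a scalar mood parameter from the sentiment of the human's sentence and selects an utterance variant accordingly. All names, the keyword-based sentiment stub, and the variant table are illustrative assumptions, not the authors' implementation.

```python
def estimate_sentiment(sentence: str) -> float:
    """Toy keyword-based sentiment in [-1, 1] (a placeholder for the
    paper's emotional-parameter extraction)."""
    positive = {"great", "happy", "good", "thanks"}
    negative = {"bad", "sad", "tired", "angry"}
    words = sentence.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 3.0))

class MoodModel:
    def __init__(self, decay: float = 0.8):
        self.mood = 0.0      # scalar mood in [-1, 1]
        self.decay = decay   # how strongly past mood persists

    def update(self, human_sentence: str) -> float:
        # Blend the previous mood with the sentiment of the new utterance.
        self.mood = self.decay * self.mood + (1 - self.decay) * estimate_sentiment(human_sentence)
        return self.mood

# Utterance variants keyed by mood band, e.g. for a schedule reminder.
VARIANTS = {
    "positive": "Great news! Your meeting starts at 10:00.",
    "neutral":  "Your meeting starts at 10:00.",
    "negative": "Just a gentle reminder: your meeting starts at 10:00.",
}

def select_utterance(mood: float) -> str:
    band = "positive" if mood > 0.3 else "negative" if mood < -0.3 else "neutral"
    return VARIANTS[band]

model = MoodModel()
mood = model.update("I feel great today, thanks!")
print(select_utterance(mood))
```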
{"title":"Verbal conversation system for a socially embedded robot partner using emotional model","authors":"Jinseok Woo, János Botzheim, N. Kubota","doi":"10.1109/ROMAN.2015.7333685","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333685","url":null,"abstract":"This paper proposes a verbal conversation system for a robot partner using emotional model. The robot partner calculates its emotional state based on the utterance sentence of the human. Then, the robot partner can control its utterance sentence based on the emotional parameters. As a results, the robot partner can interact with human emotionally naturally. In this paper, we explain the three parts of the conversation system's structure. The first part is time dependent selection based on the database contents. In this mode, the robot tells timely important contents, for example schedules. The mood parameter is used to change the sentence in this mode. The second component is utterance flow learning to select the utterance contents. The robot selects utterance sentence based on the utterance flow information and using its mood value as well. The third component is sentence building based on predefined rules. The rules include personality model of the robot partner. In this paper, we use emotional parameters based on the human sentences to make a natural communication system. Finally, we show experimental results of the proposed method, and conclude the paper. The future research for improving the robot partner system is discussed as well.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127514663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time object tracking via a mixture model
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333701
Dongxu Gao, Zhaojie Ju, Jiangtao Cao, Honghai Liu
Object tracking is applied in many fields, such as intelligent surveillance and computer vision. Although much progress has been made, many open problems still pose major challenges, chiefly concerning the appearance model and real-time performance. This paper proposes a novel method to handle both problems. Locally dense context features and image information (i.e., the relationship between the object and its surrounding regions) are combined in a Bayesian framework, so that tracking becomes a prediction problem that requires computing the posterior probability. Both scale variation and template updating are considered in the proposed algorithm to ensure its effectiveness. To make the algorithm run in real time, a Fourier Transform (FT) is used when solving the Bayesian equation. As a result, the mixture model for object tracking (MMOT) runs in real time and outperforms state-of-the-art algorithms on several challenging image sequences in terms of accuracy, speed, and robustness.
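The Fourier-transform step can be made concrete with a short NumPy sketch. Assuming the tracker follows the usual spatio-temporal-context style of formulation, in which the confidence map is modeled as a convolution of an unknown filter with a context prior (an assumption here; the paper's exact model terms are not reproduced), the filter can be recovered with a single FFT per frame:

```python
import numpy as np

def learn_context_filter(confidence: np.ndarray, prior: np.ndarray,
                         eps: float = 1e-6) -> np.ndarray:
    """Solve c = h * p (convolution) for h in the Fourier domain."""
    H = np.fft.fft2(confidence) / (np.fft.fft2(prior) + eps)
    return np.real(np.fft.ifft2(H))

def predict_confidence(filt: np.ndarray, prior: np.ndarray) -> np.ndarray:
    """Apply the learned filter to the next frame's context prior."""
    C = np.fft.fft2(filt) * np.fft.fft2(prior)
    return np.real(np.fft.ifft2(C))

# The new target location is the argmax of the predicted confidence map.
h, w = 64, 64
conf = predict_confidence(np.random.rand(h, w), np.random.rand(h, w))
y, x = np.unravel_index(np.argmax(conf), conf.shape)
```

Because both learning and prediction reduce to element-wise operations in the frequency domain, the per-frame cost is dominated by the FFTs, which is what makes real-time operation plausible.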
{"title":"Real time object tracking via a mixture model","authors":"Dongxu Gao, Zhaojie Ju, Jiangtao Cao, Honghai Liu","doi":"10.1109/ROMAN.2015.7333701","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333701","url":null,"abstract":"Object tracking has been applied in many fields such as intelligent surveillance and computer vision. Although much progress has been made, there are still many puzzles which pose a huge challenge to object tracking. Currently, the problems are mainly caused by appearance model as well as real-time performance. A novel method was been proposed in this paper to handle both of these problems. Locally dense contexts feature and image information (i.e. the relationship between the object and its surrounding regions) are combined in a Bayes framework. Then the tracking problem can be seen as a prediction question which need to compute the posterior probability. Both scale variations and temple updating are considered in the proposed algorithm to assure the effectiveness. To make the algorithm runs in a real time system, a Fourier Transform (FT) is used when solving the Bayes equation. Therefore, the MMOT (Mixture model for object tracking) runs in real-time and performs better than state-of-the-art algorithms on some challenging image sequences in terms of accuracy, quickness and robustness.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125837071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-modal sensing for human activity recognition
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333653
Barbara Bruno, Jasmin Grosinger, F. Mastrogiovanni, F. Pecora, A. Saffiotti, Subhash Sathyakeerthy, A. Sgorbissa
Robots for the elderly are a particular category of home assistive robots that help people carry out daily-life tasks so as to extend their independent life. Such robots should be able to determine the user's level of independence and track its evolution over time, in order to adapt the assistance to the person's capabilities and needs. Human activity recognition systems employ various sensing strategies, relying on environmental or wearable sensors, to recognize the daily-life activities that provide insight into a person's health status. The main contribution of this article is the design of a heterogeneous information management framework that can describe a wide variety of human activities in terms of multi-modal environmental and wearable sensing data, and that provides accurate knowledge about the user's activity to any assistive robot.
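One plausible way to realize the data layer of such a framework is to normalize environmental and wearable readings into a single time-ordered event stream that a recognition module (or an assistive robot) can query. The schema and field names below are illustrative assumptions, not the authors' design:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class SensorEvent:
    timestamp: float   # seconds since epoch, on a shared clock
    source: str        # e.g. "wrist_accelerometer", "kitchen_pir"
    modality: str      # "wearable" or "environmental"
    payload: Any       # raw reading, kept source-specific

def merge_streams(*streams: list) -> list:
    """Fuse per-sensor streams into one time-ordered timeline."""
    merged = [event for stream in streams for event in stream]
    return sorted(merged, key=lambda e: e.timestamp)

wearable = [SensorEvent(10.0, "wrist_accelerometer", "wearable", (0.1, 9.8, 0.2))]
ambient = [SensorEvent(9.5, "kitchen_pir", "environmental", True)]
timeline = merge_streams(wearable, ambient)  # PIR event first, then the IMU sample
```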
{"title":"Multi-modal sensing for human activity recognition","authors":"Barbara Bruno, Jasmin Grosinger, F. Mastrogiovanni, F. Pecora, A. Saffiotti, Subhash Sathyakeerthy, A. Sgorbissa","doi":"10.1109/ROMAN.2015.7333653","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333653","url":null,"abstract":"Robots for the elderly are a particular category of home assistive robots, helping people in the execution of daily life tasks to extend their independent life. Such robots should be able to determine the level of independence of the user and track its evolution over time, to adapt the assistance to the person capabilities and needs. Human Activity Recognition systems employ various sensing strategies, relying on environmental or wearable sensors, to recognize the daily life activities which provide insights on the health status of a person. The main contribution of the article is the design of an heterogeneous information management framework, allowing for the description of a wide variety of human activities in terms of multi-modal environmental and wearable sensing data and providing accurate knowledge about the user activity to any assistive robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115222503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long-term knowledge acquisition in a memory-based epigenetic robot architecture for verbal interaction
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333563
F. Pratama, F. Mastrogiovanni, Sungmoon Jeong, N. Chong
We present a robot cognitive framework based on (a) a memory-like architecture and (b) the notion of “context”. We posit that relying solely on machine learning techniques may not be the right approach for long-term, continuous knowledge acquisition. Since we are interested in long-term human-robot interaction, we focus on a scenario where a robot “remembers” relevant events happening in the environment. By visually sensing its surroundings, the robot is expected to infer and remember snapshots of events, and to recall specific past events based on inputs and contextual information from humans. Using a COTS vision framework for the experiment, we show that the robot is able to form “memories” and recall related events based on cues and the context given during the human-robot interaction process.
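The cue-based recall described above can be sketched as a small episodic store. The snapshot fields and the overlap-based matching rule below are illustrative assumptions rather than the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    time: float
    description: str                                  # e.g. "red cup placed on table"
    context: set = field(default_factory=set)         # e.g. {"kitchen", "alice"}

class EpisodicMemory:
    def __init__(self):
        self.snapshots = []

    def remember(self, snap: Snapshot) -> None:
        self.snapshots.append(snap)

    def recall(self, cues: set) -> list:
        """Return past events ranked by overlap with the human's cues."""
        scored = [(len(cues & s.context), s) for s in self.snapshots]
        return [s for score, s in sorted(scored, key=lambda p: -p[0]) if score > 0]

mem = EpisodicMemory()
mem.remember(Snapshot(1.0, "red cup placed on table", {"kitchen", "alice", "cup"}))
hits = mem.recall({"kitchen", "cup"})   # recalls the cup event from partial cues
```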
{"title":"Long-term knowledge acquisition in a memory-based epigenetic robot architecture for verbal interaction","authors":"F. Pratama, F. Mastrogiovanni, Sungmoon Jeong, N. Chong","doi":"10.1109/ROMAN.2015.7333563","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333563","url":null,"abstract":"We present a robot cognitive framework based on (a) a memory-like architecture; and (b) the notion of “context”. We posit that relying solely on machine learning techniques may not be the right approach for a long-term, continuous knowledge acquisition. Since we are interested in long-term human-robot interaction, we focus on a scenario where a robot “remembers” relevant events happening in the environment. By visually sensing its surroundings, the robot is expected to infer and remember snapshots of events, and recall specific past events based on inputs and contextual information from humans. Using a COTS vision frameworks for the experiment, we show that the robot is able to form “memories” and recall related events based on cues and the context given during the human-robot interaction process.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122408354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly state assessing of human using walker-type support system based on statistical analysis
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333681
Y. Hirata, Hiroki Yamaya, K. Kosuge, Atsushi Koujina, T. Shirakawa, Takahiro Katayama
In this paper, we propose a method to assess the extent of a user's anomaly state while using a walker-type support system. Elderly and handicapped people use walker-type support systems to keep their balance and support their weight. Although such a system moves easily in response to the force applied by the user, accidents such as falls and collisions with obstacles have been reported. An anomaly state that could cause severe injury should be detected before an accident occurs, so that the walker-type support system can prevent it. We focus on assessing the extent of the user's anomaly state through statistical analysis of the force the user applies. The applied force is modeled in real time with a Gaussian Mixture Model (GMM), and each GMM parameter is determined statistically. The extent of the anomaly state is then assessed with the Hellinger score, which compares the data set of the normal state with that of the anomaly state. The proposed method is applied to a walker-type support system we developed, equipped with a simple force sensor, and experiments are conducted under several walking states and environmental conditions.
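The GMM-plus-Hellinger pipeline can be sketched for the univariate case, where the Hellinger distance between two Gaussians has a closed form. This is a minimal sketch under assumed data; the paper's multivariate GMM variant and its threshold are not reproduced here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def hellinger_gauss(mu1, s1, mu2, s2) -> float:
    """Closed-form Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    h2 = 1.0 - np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-(mu1 - mu2) ** 2 / (4 * (s1**2 + s2**2)))
    return float(np.sqrt(max(h2, 0.0)))

def fit_force_model(forces: np.ndarray, n_components: int = 1) -> GaussianMixture:
    """Fit a Gaussian (mixture) model to a stream of force samples."""
    return GaussianMixture(n_components=n_components).fit(forces.reshape(-1, 1))

normal = fit_force_model(np.random.normal(20.0, 2.0, 500))   # calibration walk
window = fit_force_model(np.random.normal(35.0, 6.0, 100))   # current samples

score = hellinger_gauss(normal.means_[0, 0], np.sqrt(normal.covariances_[0, 0, 0]),
                        window.means_[0, 0], np.sqrt(window.covariances_[0, 0, 0]))
anomalous = score > 0.5   # the threshold is an assumed tuning parameter
```

A score near 0 means the current force distribution matches the normal state; a score near 1 means the two distributions barely overlap, flagging a possible anomaly.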
{"title":"Anomaly state assessing of human using walker-type support system based on statistical analysis","authors":"Y. Hirata, Hiroki Yamaya, K. Kosuge, Atsushi Koujina, T. Shirakawa, Takahiro Katayama","doi":"10.1109/ROMAN.2015.7333681","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333681","url":null,"abstract":"In this paper, we propose a method to assess an extent of anomaly state of human using a walker-type support system. The elderly and the handicapped people use the walker-type support system to keep their balance and support their weight. Although the walker-type support system is easy to move based on the applied force of the user, several accidents such as falling and colliding with the obstacle have been reported. The anomaly state that causes a severe injury of the user should be detected before accident and the walker-type support system should prevent such accidents. In this paper, we focus on assessing the extent of the anomaly state of the user based on the statistical analysis of the applied force of the user. This research models the applied force of the user in real time by using the Gaussian Mixture Model (GMM), and we determine each parameter of GMM statistically. In addition, we assess the extent of the anomaly state of the user by using the Hellinger score, which can compare the data set of the normal state with that of anomaly state. The proposed method is applied to developed walker-type support system with simple force sensor, and we conduct the experiments in the several walking states and the environmental conditions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129240739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calligraphy-stroke learning support system using projection
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333576
Masashi Narita, T. Matsumaru
In this paper, a calligraphy learning support system that uses a projector to support brushwork learning is presented. The system provides three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. To instruct three-dimensional brushwork, such as the writing speed, pressure, and orientation of the brush, we propose an instruction method that presents information only at the brush tip, which makes the brush position and orientation visible to the learner. A copying experiment was performed using the proposed method, and its effectiveness was examined through the experiment.
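A minimal sketch of the ability-based mode selection and the brush-tip cue that this description implies might look as follows; the thresholds and the cue encoding are purely illustrative assumptions, not taken from the paper:

```python
def select_training_mode(ability: float) -> str:
    """Pick a training mode from a 0-1 learner-ability score (assumed scale)."""
    if ability < 0.3:
        return "tracing"      # follow a projected stroke directly
    if ability < 0.7:
        return "combination"  # trace key strokes, copy the rest
    return "copying"          # reproduce a model shown beside the paper

def brush_tip_cue(speed: float, pressure: float, tilt_deg: float) -> dict:
    """Encode 3-D brushwork targets as a cue projected at the brush tip only."""
    return {
        "ring_radius_mm": 2.0 + 8.0 * pressure,          # larger ring = press harder
        "ring_color": "green" if speed <= 1.0 else "red", # red = slow down
        "tilt_arrow_deg": tilt_deg,                       # desired brush orientation
    }
```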
{"title":"Calligraphy-stroke learning support system using projection","authors":"Masashi Narita, T. Matsumaru","doi":"10.1109/ROMAN.2015.7333576","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333576","url":null,"abstract":"In this paper, a calligraphy learning support system is presented for supporting brushwork learning by using a projector. The system was designed to provide the three kinds of training according to the learner's ability as followings: copying training, tracing training, and combination of them. In order to instruct the three-dimensional brushwork such as the writing speed, pressure, and orientation of the brush, we proposed the instruction method by presenting the information to only brush tip. This method can be visualized a brush position and the orientation. In addition, the copying experiment was performed using the proposed method. As a result, the efficiency of the proposed method was examined through experiment.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123911755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combined kinesthetic and simulated interface for teaching robot motion models
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333655
Elizabeth Cha, Klas Kronander, A. Billard
The success of a Learning from Demonstration system depends on the quality of the demonstrated data. Kinesthetic demonstrations are often assumed to be the best way to provide demonstrations for manipulation tasks; however, there is little research to support this. In this work, we explore the use of a simulated environment as an alternative to, and in combination with, kinesthetic demonstrations when using an autonomous dynamical system to encode motion. We present the results of a user study comparing three demonstration interfaces for a manipulation task on a KUKA LWR robot.
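For readers unfamiliar with dynamical-system motion encoding, a bare-bones sketch is to fit x_dot = A(x - x_goal) to the demonstrated trajectories by least squares. This stand-in omits the stability constraints used in practice and is not the paper's formulation; it is included only to ground the terminology:

```python
import numpy as np

def fit_linear_ds(X: np.ndarray, Xdot: np.ndarray, x_goal: np.ndarray) -> np.ndarray:
    """X: (N, d) demonstrated positions; Xdot: (N, d) demonstrated velocities."""
    E = X - x_goal                      # positions relative to the attractor
    M, *_ = np.linalg.lstsq(E, Xdot, rcond=None)
    return M.T                          # A such that x_dot = A @ (x - x_goal)

def rollout(A, x0, x_goal, dt=0.01, steps=500):
    """Integrate the learned dynamical system from a new start point."""
    x, path = x0.copy(), [x0.copy()]
    for _ in range(steps):
        x = x + dt * (A @ (x - x_goal))
        path.append(x.copy())
    return np.array(path)
```

Because the motion is encoded as an autonomous vector field rather than a timed trajectory, the same model generalizes to start points never demonstrated, which is what makes the quality of the demonstrations (kinesthetic or simulated) so consequential.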
{"title":"Combined kinesthetic and simulated interface for teaching robot motion models","authors":"Elizabeth Cha, Klas Kronander, A. Billard","doi":"10.1109/ROMAN.2015.7333655","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333655","url":null,"abstract":"The success of a Learning from Demonstration system depends on the quality of the demonstrated data. Kinesthetic demonstrations are often assumed to be the best method of providing demonstrations for manipulation tasks, however, there is little research to support this. In this work, we explore the use of a simulated environment as an alternative to and in combination with kinesthetic demonstrations when using an autonomous dynamical system to encode motion. We present the results of a user study comparing three demonstrations interfaces for a manipulation task on a KUKA LWR robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132491304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering emotional memories in robot soccer players
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333599
Christopher Allan, M. Couceiro, P. A. Vargas
Memory is central to the emotional experience of playing sports. The capacity to recall great achievements, triumphs, and defeats inevitably influences the emotional state of athletes and of people in general. Nevertheless, research on robot competitions that strive to mimic real-world soccer, such as the well-known RoboCup challenge, has never considered the relevance of memory and emotions, nor their possible connection. This paper proposes a data mining approach to emotional memory modelling with the purpose of replicating the link between emotion and memory in a RoboCup scenario. A model of emotional fluctuations, inspired by neurological disorders, is also proposed to investigate their effect on the robot's ability to choose appropriate behaviours. The proposed model is evaluated with the NAO robot in a simulation environment. By using emotion to assess stored memories, NAO was able to choose behaviours based on the optimal outcomes achieved in the past.
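One way to ground the idea of emotion-weighted memory retrieval is the toy model below, where episodes store an outcome together with an emotional valence, and the behaviour with the best remembered value is chosen. The field names and the weighting are assumptions for illustration, not the paper's model:

```python
from collections import defaultdict

class EmotionalMemory:
    def __init__(self):
        self.episodes = defaultdict(list)   # behaviour -> [(outcome, valence)]

    def store(self, behaviour: str, outcome: float, valence: float) -> None:
        self.episodes[behaviour].append((outcome, valence))

    def best_behaviour(self, candidates: list) -> str:
        """Prefer behaviours whose remembered outcomes carry strong positive
        emotion; behaviours with no memories get a neutral default score."""
        def value(b):
            eps = self.episodes.get(b)
            if not eps:
                return 0.0
            return sum(o * (1.0 + v) for o, v in eps) / len(eps)
        return max(candidates, key=value)

mem = EmotionalMemory()
mem.store("shoot", outcome=1.0, valence=0.8)    # scored a goal: vivid memory
mem.store("pass", outcome=0.2, valence=-0.1)
action = mem.best_behaviour(["shoot", "pass", "dribble"])   # -> "shoot"
```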
{"title":"Uncovering emotional memories in robot soccer players","authors":"Christopher Allan, M. Couceiro, P. A. Vargas","doi":"10.1109/ROMAN.2015.7333599","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333599","url":null,"abstract":"Memory is central to the emotional experience of playing sports. The capacity to recall great achievements, triumphs and defeats inevitably influences the emotional state of athletes and people in general. Nevertheless, research on robot competitions that has been striving to mimic real-world soccer, such as the well-known RoboCup challenge, never considered the relevance of memory and emotions, nor their possible connection. This paper proposes a data mining approach to emotional memory modelling with the purpose of replicating the link between emotion and memory in a Ro-boCup scenario. A model of emotional fluctuations is also proposed based on neurological disorders to investigate their effect on the robot's ability to choose appropriate behaviours. The proposed model is evaluated using the NAO robot on a simulation environment. By utilizing emotion to assess memories stored, NAO was able to successfully choose behaviours based on the optimal outcomes achieved in the past.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130906253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamics of social positioning patterns in group-robot interactions
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333633
J. Vroon, M. Joosse, M. Lohse, Jan Kolkmeier, Jaebok Kim, K. Truong, G. Englebienne, D. Heylen, V. Evers
When a mobile robot interacts with a group of people, it has to consider its position and orientation. We introduce a novel study aimed at generating hypotheses about suitable behavior for such social positioning, explicitly focusing on interaction with small groups of users and allowing for the temporal and social dynamics inherent in most interactions. In particular, we look at three interactions: approach, converse, and retreat. In this study, groups of three participants and a telepresence robot (controlled remotely by a fourth participant) solved a task together while we collected quantitative and qualitative data, including tracking of position/orientation and ratings of the behaviors used. In the data we observed a variety of patterns that can be extrapolated into hypotheses by inductive reasoning. One such pattern/hypothesis is that a (telepresence) robot can pass through a group when retreating without affecting how comfortable that retreat is for the group members. Another is that a group rates the position/orientation of a (telepresence) robot as more comfortable when it is aimed more at the center of the group.
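The second hypothesis has a simple geometric reading: orient the robot toward the centroid of the group members. The sketch below computes that heading; it is purely illustrative, since the study reports comfort ratings rather than a control law:

```python
import math

def heading_to_group_center(robot_xy, member_positions):
    """Heading (radians, world frame) that points the robot at the group centroid."""
    cx = sum(p[0] for p in member_positions) / len(member_positions)
    cy = sum(p[1] for p in member_positions) / len(member_positions)
    return math.atan2(cy - robot_xy[1], cx - robot_xy[0])

# A robot at the origin facing three group members clustered to its right:
theta = heading_to_group_center((0.0, 0.0), [(2.0, 1.0), (2.5, -0.5), (3.0, 0.8)])
```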
{"title":"Dynamics of social positioning patterns in group-robot interactions","authors":"J. Vroon, M. Joosse, M. Lohse, Jan Kolkmeier, Jaebok Kim, K. Truong, G. Englebienne, D. Heylen, V. Evers","doi":"10.1109/ROMAN.2015.7333633","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333633","url":null,"abstract":"When a mobile robot interacts with a group of people, it has to consider its position and orientation. We introduce a novel study aimed at generating hypotheses on suitable behavior for such social positioning, explicitly focusing on interaction with small groups of users and allowing for the temporal and social dynamics inherent in most interactions. In particular, the interactions we look at are approach, converse and retreat. In this study, groups of three participants and a telepresence robot (controlled remotely by a fourth participant) solved a task together while we collected quantitative and qualitative data, including tracking of positioning/orientation and ratings of the behaviors used. In the data we observed a variety of patterns that can be extrapolated to hypotheses using inductive reasoning. One such pattern/hypothesis is that a (telepresence) robot could pass through a group when retreating, without this affecting how comfortable that retreat is for the group members. Another is that a group will rate the position/orientation of a (telepresence) robot as more comfortable when it is aimed more at the center of that group.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125746176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333565
Marwin Sorce, G. Pointeau, Maxime Petit, Anne-Laure Mealier, G. Gibert, Peter Ford Dominey
With the Robonaut-2 humanoid robot now permanently flying on the ISS, the potential role of robots in cooperative activity in space is becoming a reality. Recent research has demonstrated that cooperation in the joint achievement of shared goals is a promising framework for human interaction with robots, with applications in space. Perhaps more importantly, given the turnover of crew members, robots could play an important role in maintaining and transferring expertise between outgoing and incoming crews. In this context, the current research builds on our experience with systems for cooperative human-robot interaction, introducing novel interface and interaction modalities that exploit the robot's long-term experience. We implement a system in which the human agent can teach the Nao humanoid new actions by physical demonstration, visual imitation, and spoken command. These actions can then be composed into joint action plans that coordinate cooperation between robot and human. We also implement algorithms for an Autobiographical Memory (ABM) that provides access to all of the robot's interaction experience. These functions are assembled into a novel interaction paradigm for the capture, maintenance, and transfer of knowledge in a five-tiered structure. The five tiers allow the robot to 1) learn simple behaviors, 2) learn shared plans composed from the learned behaviors, 3) execute the learned shared plans efficiently, 4) teach shared plans to new humans, and 5) answer questions from the human to better explain the origin of a shared plan. Our results demonstrate the feasibility of the system and indicate that such humanoid robot systems offer a potential mechanism for accumulating and transferring knowledge between humans who are not co-present. Applications to space-flight operations as a target scenario are discussed.
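The five-tier structure can be pictured as a small data model: learned behaviors with provenance (tier 1), shared plans composed from them (tier 2), logged execution (tier 3), and an autobiographical memory that supports teaching and question answering (tiers 4-5). All class and method names below are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Behaviour:
    name: str
    taught_by: str    # provenance: which crew member demonstrated it
    modality: str     # "physical", "imitation", or "spoken"

@dataclass
class SharedPlan:
    name: str
    steps: list       # (agent, Behaviour) pairs, alternating robot/human roles

class AutobiographicalMemory:
    def __init__(self):
        self.log = []

    def record(self, event: str) -> None:
        self.log.append(event)

    def answer(self, plan: SharedPlan) -> str:
        """Tier 5: explain where a shared plan came from."""
        teachers = {b.taught_by for _, b in plan.steps}
        return f"Plan '{plan.name}' was assembled from behaviours taught by {teachers}."

def execute(plan: SharedPlan, abm: AutobiographicalMemory) -> None:
    """Tier 3: run the plan, logging each step for later recall and teaching."""
    for agent, behaviour in plan.steps:
        abm.record(f"{agent} performed {behaviour.name}")

grab = Behaviour("grab-toolbox", taught_by="crew-A", modality="physical")
hand = Behaviour("hand-over", taught_by="crew-A", modality="imitation")
plan = SharedPlan("stow-tools", [("robot", grab), ("human", hand)])
abm = AutobiographicalMemory()
execute(plan, abm)
print(abm.answer(plan))   # an incoming crew member asks about the plan's origin
```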
{"title":"Proof of concept for a user-centered system for sharing cooperative plan knowledge over extended periods and crew changes in space-flight operations","authors":"Marwin Sorce, G. Pointeau, Maxime Petit, Anne-Laure Mealier, G. Gibert, Peter Ford Dominey","doi":"10.1109/ROMAN.2015.7333565","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333565","url":null,"abstract":"With the Robonaut-2 humanoid robot now permanently flying on the ISS, the potential role for robots participating in cooperative activity in space is becoming a reality. Recent research has demonstrated that cooperation in the joint achievement of shared goals is a promising framework for human interaction with robots, with application in space. Perhaps more importantly, with the turn-over of crew members, robots could play an important role in maintaining and transferring expertise between outgoing and incoming crews. In this context, the current research builds on our experience in systems for cooperative human-robot interaction, introducing novel interface and interaction modalities that exploit the long-term experience of the robot. We implement a system where the human agent can teach the Nao humanoid new actions by physical demonstration, visual imitation, and spoken command. These actions can then be composed into joint action plans that coordinate the cooperation between agent and human. We also implement algorithms for an Autobiographical Memory (ABM) that provides access to of all of the robots interaction experience. These functions are assembled in a novel interaction paradigm for the capture, maintenance and transfer of knowledge in a five-tiered structure. The five tiers allow the robot to 1) learn simple behaviors, 2) learn shared plans composed from the learned behaviors, 3) execute the learned shared plans efficiently, 4) teach shared plans to new humans, and 5) answer questions from the human to better understand the origin of the shared plan. Our results demonstrate the feasibility of this system and indicate that such humanoid robot systems will provide a potential mechanism for the accumulation and transfer of knowledge, between humans who are not co-present. Applications to space flight operations as a target scenario are discussed.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129727448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}