Real-time motion adaptation using relative distance space representation
Yiming Yang, V. Ivan, S. Vijayakumar
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251428
2015 International Conference on Advanced Robotics (ICAR)
Reacting to environment changes is a major challenge for real-world robot applications. This paper presents a novel approach that allows the robot to adapt quickly to changes, particularly in the presence of moving targets and dynamic obstacles. Typically, replanning or adaptation in configuration space is required whenever the environment changes. Instead, our method maintains a plan in a relative distance space, rather than in configuration space, so that the plan remains valid across different environments. In addition, we introduce an incremental planning structure that handles unexpected obstacles appearing during execution. The main contribution is that the relative distance space representation encodes pose re-targeting, reaching, and avoidance tasks within one unified cost term that can be solved in real time, yielding a fast implementation for high degree-of-freedom (DOF) robots. We evaluate our method on a 7-DOF LWR robot arm and a 14-DOF dual-arm Baxter robot.

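As an illustrative aside, the "unified cost term over relative distances" can be pictured in a drastically simplified form. The sketch below is a stand-in, not the paper's formulation: a reaching term on the end-effector-to-target distance plus a penalty on link-point-to-obstacle distances inside a safety margin. Function name, weights, and the margin are all assumptions.

```python
import numpy as np

def relative_distance_cost(points, target, obstacles, margin=0.1,
                           w_reach=1.0, w_avoid=10.0):
    """Toy unified cost over relative distances.
    points: (N, 3) sample points along the robot's links (last = end-effector);
    target: (3,) goal position; obstacles: (M, 3) obstacle points."""
    # Reaching term: squared distance from end-effector to target.
    cost = w_reach * np.linalg.norm(points[-1] - target) ** 2
    # Avoidance term: penalize only distances that fall inside the margin.
    for p in points:
        for o in obstacles:
            d = np.linalg.norm(p - o)
            if d < margin:
                cost += w_avoid * (margin - d) ** 2
    return cost
```

Because every term depends only on relative distances, the same plan can stay valid when the target or obstacles move, which is the property the abstract highlights.
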
Optimization of the switching surface for the simplest passive dynamic biped
A. Safa, M. Naraghi, A. Alasty
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251481
Recently, it has been shown that a different switching surface can preserve the walking trajectory while altering the walking stability [1], [2]. In this paper, using the simplest passive dynamic biped, we optimize the switching surface to maximize the robot's stability. Here, the stability measure is the size of the basin of attraction, i.e. the set of all initial conditions leading to the system's equilibrium point. Numerical investigations indicate that maximum stability is obtained at neither the highest nor the lowest walking speed.

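The basin-of-attraction measure can be illustrated with a toy sampling estimate. This is a generic sketch, not the biped's dynamics: it iterates an arbitrary 1D return map from sampled initial conditions and reports the fraction that converge to a given fixed point (function name, tolerance, and step cap are illustrative assumptions).

```python
def basin_fraction(step_map, fixed_point, samples, tol=1e-6, max_steps=200):
    """Estimate basin size as the fraction of sampled initial conditions
    whose iterates under step_map converge to fixed_point."""
    converged = 0
    for x0 in samples:
        x = x0
        for _ in range(max_steps):
            x = step_map(x)
            if abs(x - fixed_point) < tol:
                converged += 1
                break
    return converged / len(samples)
```

For example, with the Newton map x ↦ (x + 1/x)/2, whose attracting fixed point at +1 has the positive reals as its basin, positive samples count as converged and negative ones do not.
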
Acquisition of grounded models of adjectival modifiers supporting semantic composition and transfer to a physical interactive robot
N. Mavridis, S. Kundig, N. Kapellas
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251463
Compositionality is a property of natural language of prime importance: it enables humans to form and conceptualize potentially novel and complex ideas by combining words. The symbol grounding problem, on the other hand, examines how meaning is anchored to entities external to language, such as sensory percepts and sensory-motor routines. In this paper we explore the intersection of compositionality and symbol grounding. We propose a methodology for constructing empirically derived models of grounded meaning that afford composition of grounded semantics, and illustrate it for the case of adjectival modifiers. Grounded models of adjectivally modified and unmodified colors are acquired through a specially designed procedure with 134 participants, from which computational models of the modifiers "dark" and "light" are derived. The generalization ability of these learnt models is quantitatively evaluated, and their usage is demonstrated on a real-world physical humanoid robot. We regard this as an important step towards extending empirical approaches to symbol grounding so that they accommodate compositionality: a necessary step towards deep natural language understanding for situated embodied agents, such as sensor-enabled ambient intelligence and interactive robots.

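One way to picture a learned modifier model, purely as a hedged illustration and not the authors' method: treat a modifier such as "dark" as an affine map on color space and fit it by least squares from (unmodified, modified) example pairs. All names here are hypothetical.

```python
import numpy as np

def fit_modifier(base, modified):
    """Least-squares affine map W (4x3) such that modified ≈ [base, 1] @ W.
    base, modified: (N, 3) arrays of paired RGB examples."""
    X = np.hstack([base, np.ones((len(base), 1))])   # append bias column
    W, *_ = np.linalg.lstsq(X, modified, rcond=None)
    return W

def apply_modifier(W, rgb):
    """Apply a fitted modifier map to a single RGB color."""
    return np.append(rgb, 1.0) @ W
```

A fitted map can then be composed with others or applied to colors never seen during acquisition, which is the kind of generalization the abstract evaluates.
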
Interaction modeling in the grasping and manipulation of 3D deformable objects
L. Zaidi, B. Bouzgarrou, L. Sabourin, Y. Mezouar
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251503
Robotic grasping has been extensively studied over the last two decades. Most research in this field has been dedicated to rigid-body grasping, and only a few studies have considered deformable objects. Nevertheless, the robotized grasping of deformable objects has many potential applications, including bio-medical processing, the food processing industry, service robotics, and robotized surgery. This paper addresses the problem of modeling interactions between a multi-fingered hand and a 3D deformable object for grasping and manipulation tasks. We present a new strategy for modeling contact interactions that defines the relationship between applied forces and object deformations. The mechanical behavior of the deformable object is modeled using a non-linear anisotropic mass-spring system. Contact forces are generated according to the relative positions and velocities between the fingertips and the boundary surface facets of the deformable object mesh. This approach reduces the number of nodes while ensuring accurate contact modeling.

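A contact force computed from relative fingertip/surface positions and velocities is commonly realized as a penalty spring-damper. The following is a minimal sketch under that common assumption, not the paper's actual model; gains, names, and the unilateral clamp are illustrative.

```python
import numpy as np

def contact_force(tip_pos, tip_vel, surf_point, surf_normal, k=500.0, c=5.0):
    """Penalty contact: spring on penetration depth plus damping on the
    normal approach velocity; force acts along the outward surface normal."""
    # Penetration depth: positive when the fingertip is inside the surface.
    depth = np.dot(surf_point - tip_pos, surf_normal)
    if depth <= 0.0:
        return np.zeros(3)              # no contact, no force
    v_n = np.dot(tip_vel, surf_normal)  # normal component of fingertip velocity
    f = k * depth - c * v_n
    return max(f, 0.0) * surf_normal    # contact can only push, never pull
```

In a mass-spring mesh, a force like this would be distributed to the nodes of the boundary facet nearest the fingertip.
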
Model predictive path planning with time-varying safety constraints for highway autonomous driving
Mehdi Jalalmaab, B. Fidan, Soo Jeon, P. Falcone
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251458
This paper develops a model predictive controller (MPC) with time-varying safety constraints for highway path planning with collision avoidance. Collision avoidance constraints with respect to the surrounding vehicles are combined with road geometry constraints, leading to a set of convex, time-varying safety constraints. The dynamics and decisions of the surrounding vehicles are also incorporated into the prediction model to forecast their trajectories. The proposed controller finds the best combination of longitudinal and lateral acceleration commands to guide the vehicle while avoiding collisions with surrounding vehicles, overtaking them where possible. Simulation results verify the performance of this predictive control strategy in different scenarios.

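A drastically simplified, one-axis illustration of predictive planning under a time-varying safety constraint: grid search over candidate accelerations stands in for the paper's convex MPC, and the lead vehicle is assumed to hold constant speed. Every name and number below is an assumption for illustration only.

```python
def plan_accel(ego, lead, a_grid, dt=0.1, horizon=10, gap_min=5.0, v_ref=30.0):
    """Pick the longitudinal acceleration minimizing a speed-tracking cost
    subject to the predicted gap to a constant-speed lead vehicle staying
    above gap_min at every step of the horizon.
    ego, lead: (position, speed) tuples."""
    best_a, best_cost = None, float("inf")
    for a in a_grid:
        p, v = ego
        lp, lv = lead
        cost, feasible = 0.0, True
        for _ in range(horizon):
            v += a * dt
            p += v * dt
            lp += lv * dt                # lead vehicle prediction
            if lp - p < gap_min:         # time-varying safety constraint violated
                feasible = False
                break
            cost += (v - v_ref) ** 2     # track the desired cruising speed
        if feasible and cost < best_cost:
            best_a, best_cost = a, cost
    return best_a
```

The real controller solves a convex program over both longitudinal and lateral commands; this sketch only conveys how prediction and constraints interact.
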
Real-time needle tip localization in 2D ultrasound images for robotic biopsies
Mert Kaya, Enes Senel, Awais Ahmad, Orcun Orhan, O. Bebek
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251432
This paper presents a real-time needle tip tracking method using 2D ultrasound (US) images for robotic biopsies. The needle tip is estimated with a Gabor-filter-based image processing algorithm, and the estimation noise is reduced with a Kalman filter. The paper also presents a needle tip tracking simulation to test the accuracy of the Kalman filter under position misalignments and tissue deformations. To execute the proposed method in real time, a bin packing method is used, reducing the processing time by 56% without a GPU. The method was tested in four different phantoms and in a water medium. The accuracy of the needle tip estimation was measured with an optical tracking system, and the root-mean-square (RMS) error of the tip position was found to be 1.17 mm. The experiments showed that the algorithm can track the needle tip in real time.

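The Kalman smoothing stage can be sketched generically. The paper does not specify its process model, so the sketch below assumes a standard constant-velocity model for the 2D tip position; noise covariances and the frame rate are illustrative.

```python
import numpy as np

def kalman_track(measurements, dt=1.0 / 30.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter smoothing noisy 2D tip detections.
    State x = [px, py, vx, vy]; returns the filtered positions."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt        # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1  # we only measure position
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = np.zeros(4), 10.0 * np.eye(4)         # diffuse initial state
    out = []
    for z in measurements:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)       # update with detection
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

In the described pipeline, the Gabor-filter detection would supply each measurement `z`, and the filtered position would be reported as the tip estimate.
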
Pushing and grasping for autonomous learning of object models with foveated vision
Robert Bevec, A. Ude
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251462
In this paper we address the problem of autonomously learning the visual appearance of unknown objects. We propose a method that integrates foveated vision on a humanoid robot with autonomous object discovery and exploratory manipulation actions such as pushing, grasping, and in-hand rotation. The humanoid robot starts by searching for objects in a visual scene and generating hypotheses about which parts of the scene could constitute an object. The hypothetical objects are verified by applying pushing actions, where the existence of an object is considered confirmed if the visual features exhibit rigid-body motion. In our previous work we showed that partial object models can be learnt by a sequential application of several robot pushes, which generates views of the object from different viewpoints. However, this approach cannot guarantee that the object will be seen from all relevant viewpoints, even after a large number of pushes. In this paper we show instead that confirmed object hypotheses contain enough information to enable grasping, and that object models can be acquired more effectively by sequentially rotating the object. We demonstrate the effectiveness of our new system by comparing object recognition results after the robot learns object models in two different ways: 1) from images acquired by several pushes, and 2) from images acquired by an initial push followed by several grasp-rotate-release action cycles.

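The rigid-body-motion test used to confirm object hypotheses can be sketched with a Kabsch fit: estimate the best rigid transform between tracked feature positions before and after a push, and accept the hypothesis if the residual is small. This is a generic sketch of the test, not the paper's implementation; the threshold is an assumption.

```python
import numpy as np

def is_rigid_motion(P, Q, tol=1e-2):
    """True if point sets P -> Q (each (N, 3)) are related by a rigid
    transform, judged by the residual of the best-fit Kabsch rotation."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)  # remove translation
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)              # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U))])  # no reflections
    R = Vt.T @ D @ U.T                               # optimal rotation
    resid = np.linalg.norm(Pc @ R.T - Qc)
    return resid < tol * len(P)
```

Features that move together under one rigid transform suggest a single object; a large residual suggests the hypothesized segment spans background or multiple objects.
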
Toward data-driven models of legged locomotion using harmonic transfer functions
Ismail Uyanik, M. M. Ankaralı, N. Cowan, Ö. Morgül, U. Saranlı
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251480
There are limits to how well manually constructed mathematical models can capture the relevant aspects of legged locomotion. Even simple models of basic behaviours such as running involve non-integrable dynamics, requiring possibly inaccurate approximations in the design of model-based controllers. In this study, we show how data-driven frequency-domain system identification can be used to obtain input-output characteristics for a class of dynamical systems around their limit cycles, with hybrid structural properties similar to those observed in legged locomotion. Under certain assumptions, the hybrid dynamics of such systems around their limit cycle can be approximated as a piecewise-smooth linear time-periodic (LTP) system, further approximated as a time-periodic, piecewise-LTI system to reduce the parametric degrees of freedom in the identification process. We use a simple one-dimensional hybrid model, in which a limit cycle is induced through the actions of a linear actuator, to illustrate the details of our method. We first derive the theoretical harmonic transfer functions (HTFs) of this example model. We then excite the model with small chirp signals to introduce perturbations around its limit cycle and present systematic identification results estimating the HTFs. Comparison between the data-driven HTFs and their theoretical predictions illustrates the potential effectiveness of such empirical identification methods in legged locomotion.

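As a hedged illustration of the identification idea in its simplest, purely LTI form: an empirical frequency response can be estimated from one excitation experiment as the ratio of output to input spectra. The paper's LTP setting is genuinely harder, since harmonic transfer functions couple the response at f to excitation at f ± k/T; this sketch omits that coupling entirely.

```python
import numpy as np

def empirical_frf(u, y, fs):
    """Single-experiment frequency response estimate H(f) = Y(f) / U(f)
    for an LTI system; u, y are equal-length input/output records at rate fs."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(u), d=1.0 / fs)
    H = np.zeros_like(U)
    excited = np.abs(U) > 1e-12      # only divide where the input has energy
    H[excited] = Y[excited] / U[excited]
    return freqs, H
```

A chirp input, as used in the paper, is attractive precisely because it places energy across a whole band, so `excited` covers many bins from one experiment.
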
Pheromone averaging exploration algorithm
Bogdan-Florin Florea, O. Grigore, M. Datcu
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251520
In this paper we introduce a novel spatial exploration and coverage algorithm based on reflex agents that use a pheromone map as a storage and communication medium. The proposed algorithm outperforms many popular reflex-agent exploration algorithms in exploration performance, measured as cumulative path length.

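A minimal sketch of a pheromone-map reflex agent, for orientation only: the paper's averaging rule is not reproduced here; this is the plain least-pheromone variant, where the agent deposits pheromone on its cell and greedily moves to the least-marked neighbour. All names are illustrative.

```python
import numpy as np

def pheromone_explore(shape, start, steps, rng):
    """Reflex exploration on a grid: deposit pheromone at the current cell,
    then move to the 4-neighbour with the least pheromone (random tie-break).
    Returns the set of visited cells and the final pheromone map."""
    pher = np.zeros(shape)
    pos = start
    visited = {start}
    for _ in range(steps):
        pher[pos] += 1.0                      # deposit on the current cell
        r, c = pos
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < shape[0] and 0 <= c + dc < shape[1]]
        lo = min(pher[n] for n in nbrs)
        ties = [n for n in nbrs if pher[n] == lo]
        pos = ties[rng.integers(len(ties))]   # break ties at random
        visited.add(pos)
    return visited, pher
```

Because unvisited cells carry zero pheromone, the agent is systematically drawn toward fresh territory, which is what makes such reflex rules usable for coverage.
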
Simultaneous human-robot adaptation for effective skill transfer
M. Zamani, Erhan Öztop
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251437
In this paper, we propose and implement a human-in-the-loop robot skill synthesis framework that involves simultaneous adaptation of the human and the robot. The human demonstrator learns to control the robot in real time to make it perform a given task, while the robot simultaneously learns from the human-guided control, creating a non-trivial coupled dynamical system. The research question we address is how this system can be tuned to facilitate faster skill transfer or to improve the performance of the transferred skill; here we report our initial work on the latter. At the beginning of the skill transfer session, the human demonstrator controls the robot exclusively, as in teleoperation. As task performance improves, the robot takes an increasingly larger share of the control, eventually reaching full autonomy. The framework is implemented and shown to work on a physical cart-pole setup. To assess whether simultaneous learning has an advantage over standard sequential learning (where the robot learns from observing the human but does not interfere with the control), experiments with two groups of subjects were performed. The results indicate that the final autonomous controller obtained via simultaneous learning achieves higher performance, measured as the average deviation from the upright posture of the pole.

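The gradual handover from teleoperation to autonomy can be sketched as a blended controller whose robot share grows with task performance. The blending rule and the update schedule below are assumptions for illustration, not the paper's exact mechanism.

```python
def blended_command(u_human, u_robot, alpha):
    """Shared control: alpha = 0 is pure teleoperation, alpha = 1 is full autonomy."""
    alpha = min(max(alpha, 0.0), 1.0)   # keep the share in [0, 1]
    return (1.0 - alpha) * u_human + alpha * u_robot

def update_share(alpha, performance, threshold=0.8, rate=0.05):
    """Hypothetical schedule: grow the robot's share only while performance is good."""
    return min(1.0, alpha + rate) if performance >= threshold else alpha
```

Run per control cycle, this drives `alpha` from 0 toward 1 as the learned controller proves itself, mirroring the session structure the abstract describes.
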