Point-to-hyperplane RGB-D pose estimation: Fusing photometric and geometric measurements
F. I. Muñoz, Andrew I. Comport
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7758090
The objective of this paper is to investigate how best to combine and fuse color and depth measurements for incremental pose estimation or 3D tracking. A framework is proposed that formulates the problem with a single measurement vector, rather than combining the two modalities in an ad-hoc manner. In particular, the full color-and-depth measurement is defined as a 4-vector (combining 3D Euclidean points with image intensities), and an optimal error for pose estimation is derived from it. As will be shown, this leads to an iterative closest point (ICP) approach in 4-dimensional space. A kd-tree is used to find the closest point in 4D space, thereby accounting for color and depth simultaneously. Based on this unified framework, a novel point-to-hyperplane approach is introduced that retains the advantages of classic point-to-plane ICP but operates in 4D space. It is then shown that there is no longer any need to provide or estimate a scale factor between the different measurement types. Consequently, the convergence domain is enlarged and alignment is sped up, while robustness and accuracy are maintained. Results on both simulated and real environments are provided, along with benchmark comparisons.
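To make the 4D matching concrete, here is a minimal sketch (illustrative only, not the authors' implementation): intensity is stacked onto the Euclidean coordinates and a nearest-neighbour query is run on the resulting 4-vectors. A brute-force search stands in for the paper's kd-tree, and no intensity scale factor is tuned, mirroring the point-to-hyperplane motivation.

```python
import numpy as np

def closest_points_4d(src_xyz, src_i, dst_xyz, dst_i):
    """For each source point, return the index of the nearest destination
    point in 4D (x, y, z, intensity) space.

    A kd-tree (e.g. scipy.spatial.cKDTree on the 4D arrays) would replace
    the brute-force distance matrix below for large clouds.
    """
    src = np.hstack([src_xyz, src_i[:, None]])  # N x 4
    dst = np.hstack([dst_xyz, dst_i[:, None]])  # M x 4
    # Squared 4D distances between every source/destination pair (N x M).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

Note how intensity disambiguates points that coincide geometrically: two destination points at the same 3D location but different brightness are distinct in 4D.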
Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data
Claudio Coppola, D. Faria, U. Nunes, N. Bellotto
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759742
Body motion during social activity is a key feature of non-verbal and physical behavior, acting as a communicative signal in social interaction between individuals. Social activity recognition is therefore important for studying human-human communication as well as human-robot interaction. Based on this, our research has three goals: (1) to recognize social behavior (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies with social features from the relationship between two individuals; (2) to learn priors based on the physical proximity between individuals during an interaction, using proxemics theory, to feed a probabilistic ensemble of activity classifiers; and (3) to provide a public dataset with RGB-D data of social daily activities, including risk situations, useful for testing assisted-living approaches, since this type of dataset is still missing. Results show that the proposed approach, which merges features with different semantics and uses proximity priors, improves classification performance in terms of precision, recall, and accuracy when compared with approaches that employ alternative strategies.
The flying anemometer: Unified estimation of wind velocity from aerodynamic power and wrenches
Teodor Tomic, Korbinian Schmid, P. Lutz, Andrew Mathers, S. Haddadin
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759264
We consider the problem of estimating the wind velocity perceived by a flying multicopter, using only data acquired by onboard sensors and knowledge of its aerodynamic model. We employ two complementary methods. The first is based on estimating the external wrench (force and torque) due to aerodynamics acting on the robot in flight; the wind velocity is obtained by inverting an identified model of the aerodynamic forces. The second method is based on estimating the propeller aerodynamic power, and provides an estimate independent of other sensors. We show how to calculate components of the wind velocity from multiple aerodynamic power measurements when the relative poses between them are known. The method uses the motor current and angular velocity measured by the electronic speed controllers, essentially using the propellers as wind sensors. The methods were verified, and the models identified, using measurements acquired during autonomous flights in a 3D wind tunnel.
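The inversion step of the wrench-based method can be pictured with a deliberately simplified, hypothetical model (the paper identifies a richer aerodynamic model): assume the external aerodynamic force obeys a linear drag law f_ext = D (v_wind - v_body) for some identified drag matrix D. The wind velocity then follows by solving the linear system.

```python
import numpy as np

def wind_from_wrench(f_ext, v_body, D):
    """Recover wind velocity from an estimated external aerodynamic force.

    Assumes the simplified linear drag model f_ext = D @ (v_wind - v_body),
    where D is an identified (invertible) 3x3 drag matrix. This is an
    illustrative stand-in for the paper's identified aerodynamic model.
    """
    return v_body + np.linalg.solve(D, f_ext)
```

With a perfect force estimate and a correct D, the true wind velocity is recovered exactly; in practice both come from noisy estimation and identification.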
Kinematic modeling and observer based control of soft robot using real-time Finite Element Method
Zhongkai Zhang, Jérémie Dequidt, A. Kruszewski, F. Largilliere, C. Duriez
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759810
This paper provides a novel approach to modeling and controlling soft robots. Based on a real-time finite element method (FEM), we obtain a globally defined discrete-time kinematic model in the workspace of soft robots. From the kinematic equations, we deduce the soft-robot Jacobian matrix and discuss the conditions for avoiding singular configurations. We then propose a novel observer-based control methodology, in which the observer is built from the finite element model, to address the control problem of soft robots. A closed-loop controller for position control of soft robots is designed based on the discrete-time model, with the feedback signal extracted by visual servoing. Finally, experimental results on a parallel soft robot show the efficiency and performance of the proposed controller.
A rigidity-based decentralized bearing formation controller for groups of quadrotor UAVs
Fabrizio Schiano, A. Franchi, Daniel Zelazo, P. Giordano
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759748
This paper considers the problem of controlling a formation of quadrotor UAVs equipped with onboard cameras able to measure relative bearings to neighboring UAVs in their local body frames. The control goal is twofold: (i) steering the agent group towards a formation defined in terms of desired bearings, and (ii) actuating the group motions in the `null-space' of the current bearing formation. The proposed control strategy relies on an extension of rigidity theory to the case of directed bearing frameworks in ℝ3×S1. This extension allows us to devise a decentralized bearing controller which, unlike most of the existing literature, requires neither a common reference frame nor reciprocal bearing measurements between the agents. Simulation and experimental results are then presented to illustrate and validate the approach.
Model-based tracking of miniaturized grippers using Particle Swarm Optimization
S. Scheggi, ChangKyu Yoon, D. Gracias, S. Misra
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759093
Micro-sized agents can benefit robotic minimally invasive surgery, since they can be inserted into the human body and use natural pathways, such as arteries, veins, or the gastrointestinal tract, to reach their target for drug delivery or diagnosis. Recently, miniaturized agents with shape-changing and gripping capabilities have provided significant advantages in performing grasping, transportation, and manipulation tasks. In order to perform such tasks robustly, it is of utmost importance to properly estimate their overall configuration. This paper presents a novel solution to the problem of estimating and tracking the 3D position, orientation, and tip configuration of miniaturized grippers from marker-less RGB visual observations obtained by a microscope. We cast this as an optimization problem, seeking the gripper model parameters that minimize the discrepancy between hypothesized instances of the gripper model and actual observations of the miniaturized gripper. This optimization problem is solved using a variant of the Particle Swarm Optimization algorithm. The proposed approach has been evaluated on several image sequences showing the grippers moving, rotating, opening and closing, and grasping biological material.
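As a rough illustration of the optimization machinery (a textbook PSO, not the paper's specific variant), the gripper model parameters could be fitted by minimizing a discrepancy objective between rendered model hypotheses and the observed image; here a generic objective function stands in for that discrepancy.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0):
    """Minimal Particle Swarm Optimization over a dim-dimensional box [-1, 1]^dim.

    `objective` maps a parameter vector to a scalar cost (in the tracking
    setting, the model-vs-observation discrepancy). Returns the best vector.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()            # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia / cognitive / social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```

On a smooth, low-dimensional objective this converges quickly; the image-based discrepancy in the paper is far less benign, which motivates their tailored variant.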
Non-parametric contextual stochastic search
A. Abdolmaleki, N. Lau, Luis Paulo Reis, G. Neumann
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759411
Stochastic search algorithms are black-box optimizers of an objective function. They have recently gained a lot of attention in operations research, machine learning, and policy search of robot motor skills, due to their ease of use and their generality. Yet many stochastic search algorithms require relearning to adapt the solution whenever the task or objective function changes even slightly. In this paper, we consider the contextual stochastic search setup: we want to find multiple good parameter vectors for multiple related tasks, where each task is described by a continuous context vector. Hence, the objective function may change slightly with each parameter vector evaluation of a task or context. Contextual algorithms have been investigated in the field of policy search; however, the search distribution typically uses a parametric model that is linear in some hand-defined context features. Finding good context features is a challenging task, and hence non-parametric methods are often preferred over their parametric counterparts. In this paper, we propose a non-parametric contextual stochastic search algorithm that can learn a non-parametric search distribution for multiple tasks simultaneously. In contrast to existing methods, our method can also learn a context-dependent covariance matrix that guides the exploration of the search process. We illustrate its performance on several non-linear contextual tasks.
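One simple way to picture a non-parametric context-to-parameter mapping (an illustrative stand-in, not the authors' algorithm) is a kernel-weighted average over previously evaluated (context, parameter) pairs, so the search mean for a new context is borrowed from nearby contexts without hand-defined features:

```python
import numpy as np

def contextual_mean(query_ctx, ctxs, params, bandwidth=0.5):
    """Nadaraya-Watson estimate of a search-distribution mean for a context.

    ctxs:   (N, c) array of previously seen context vectors.
    params: (N, d) array of parameter vectors found good for those contexts.
    Returns a (d,) mean, weighting stored parameters by a Gaussian kernel
    on context distance. Purely illustrative of the non-parametric idea.
    """
    d2 = ((ctxs - query_ctx) ** 2).sum(axis=1)        # squared context distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
    w /= w.sum()
    return w @ params
```

A context-dependent covariance could be built analogously from kernel-weighted second moments, which is the exploration-shaping role the paper's learned covariance plays.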
Cooperative aerial tele-manipulation with haptic feedback
M. Mohammadi, A. Franchi, Davide Barcelli, D. Prattichizzo
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759747
In this paper, we propose a bilateral tele-operation scheme for cooperative aerial manipulation, in which a human operator drives a team of Vertical Take-Off and Landing (VTOL) aerial vehicles that have previously grasped an object, and receives force feedback depending on the state of the system. For application scenarios in which dexterous manipulation by each robot is not necessary, we propose using a rigid tool attached to each vehicle through a passive spherical joint, equipped with a simple adhesive mechanism at the tool-tip that can stick to the grasped object. With more than two robots, we use the extra degrees of freedom to find the optimal force allocation in terms of minimum power and force smoothness. The human operator commands a desired trajectory for the robot team through a haptic interface to a pose controller, and the output of the pose controller, together with system constraints such as the limited VTOL forces and contact maintenance, defines the feasible set of forces. An on-line optimization then allocates forces by minimizing a cost function of the forces and their variation. Finally, propeller thrusts are computed by a dedicated attitude and thrust controller in a decentralized fashion. A human/hardware-in-the-loop simulation study shows the efficiency of the proposed scheme and the importance of haptic feedback for achieving better performance.
HPP: A new software for constrained motion planning
Joseph Mirabel, S. Tonneau, Pierre Fernbach, A. Seppala, Mylène Campana, N. Mansard, F. Lamiraux
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759083
We present HPP, software designed for complex classes of motion planning problems, such as navigation among movable objects, manipulation, contact-rich multiped locomotion, and elastic rods in cluttered environments. HPP is an open-source answer to the lack of a standard framework for these problems, which are important to both the robotics and graphics communities.
Towards automated system and experiment reproduction in robotics
Florian Lier, Marc Hanheide, L. Natale, Simon Schulz, Jonathan Weisz, S. Wachsmuth, S. Wrede
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759508
Even though research on autonomous robots and human-robot interaction has made great progress in recent years, and reusable software and hardware components are available, many reported findings are hard for fellow scientists to reproduce. Reproducibility is usually impeded because required information, such as the specification of software versions and their configuration, required data sets, and experiment protocols, is not mentioned or referenced in most publications. To address these issues, we recently introduced an integrated tool chain, and its underlying development process, to facilitate reproducibility in robotics. In this contribution we instantiate the complete tool chain in a user study in order to assess its applicability and usability. To this end, we chose three robotic systems from independent institutions, modeled them in our tool chain, and prepared three exemplary experiments. We then asked twelve researchers to reproduce one of the formerly unknown systems and its associated experiment. We show that all twelve scientists were able to replicate a formerly unknown robotics experiment using our tool chain.