A path planning algorithm for single-ended continuous planar robotic ribbon folding
Anusha Nagabandi, Liyu Wang, R. Fearing
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759500
Ribbon folding is a new approach to structure formation that forms higher-dimensional structures from a lower-dimensional primitive, namely a ribbon. In this paper, we present a novel algorithm for path planning in ribbon folding of multi-link planar structures. We first represent the desired structure as a graph of edges and nodes. We then use graph theory to show that for any object represented by a connected graph, there exists a continuous path that visits all of its edges. Finally, we develop a path planning algorithm that accounts for the physical constraints of the folding machine. The input is the desired planar structure, and the output is a sequence of ribbon folds that creates the structure using the minimum number of folds. The results of this algorithm are successfully used to fold various planar structures.
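The graph-theoretic claim above is closely related to the classical Eulerian-path condition: a connected undirected graph can be traversed in one continuous stroke that uses every edge exactly once iff it has zero or two odd-degree nodes; otherwise some edges must be retraced. A minimal sketch of that check plus Hierholzer's traversal algorithm (an illustration of the underlying graph theory, not the paper's planner):

```python
from collections import defaultdict

def eulerian_path(edges):
    """Return a node sequence that traverses every edge exactly once, or
    None if impossible. A connected undirected (multi)graph admits such a
    path iff it has zero or two odd-degree nodes (Euler's condition)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    odd = [n for n in adj if len(adj[n]) % 2 == 1]
    if len(odd) not in (0, 2):
        return None  # some edges would have to be retraced
    # Hierholzer's algorithm: start at an odd-degree node if one exists.
    stack = [odd[0] if odd else next(iter(adj))]
    path = []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)  # consume the edge in both directions
            stack.append(w)
        else:
            path.append(stack.pop())
    if len(path) != len(edges) + 1:
        return None  # graph was not connected
    return path[::-1]
```

For a square with one diagonal (two odd-degree corners), the returned path starts and ends at the odd-degree nodes, which is exactly where a single continuous ribbon must begin and end.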
Trajectory learning from human demonstrations via manifold mapping
Michihisa Hiratsuka, Ndivhuwo Makondo, Benjamin Rosman, O. Hasegawa
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759579
This work proposes a framework that enables arbitrary robots with unknown kinematic models to imitate human demonstrations and reproduce the acquired skill in real time. The diversity of robots active outside the laboratory is growing constantly, and to this end we present an approach that lets users easily teach a skill to a robot of any body configuration. Our method takes a motion trajectory obtained from human demonstrations via a Kinect sensor and projects it onto a corresponding human skeleton model. The kinematic mapping between the robot and the human model is learned using Local Procrustes Analysis, which enables transfer of the demonstrated trajectory from the human model to the robot. Finally, the transferred trajectory is modeled with Dynamic Movement Primitives, allowing it to be reproduced in real time. Experiments in simulation on a 4-degree-of-freedom robot show that our method correctly imitates various skills demonstrated by a human.
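Local Procrustes Analysis fits many local linear alignments between the two spaces; the core building block it applies locally is ordinary Procrustes (Kabsch) alignment of corresponding point sets. A sketch of that building block only (the function name and rigid-only formulation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def procrustes_align(X, Y):
    """Least-squares rigid alignment (Kabsch): rotation R and translation t
    such that Y ~= X @ R.T + t for corresponding rows of X and Y (N x d)."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    A = (Y - my).T @ (X - mx)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(A)
    s = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.diag([1.0] * (X.shape[1] - 1) + [s])
    R = U @ D @ Vt
    t = my - R @ mx
    return R, t
```

Applied to corresponding human-model and robot workspace points, the recovered (R, t) maps demonstrated poses into the robot's frame; LPA learns such maps per local region rather than one global one.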
Autonomous 6D-docking and manipulation with non-stationary-base using self-reconfigurable modular robots
Luenin Barrios, T. Collins, Robert Kovac, Wei-Min Shen
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759451
Aggregation of self-reconfigurable robotic modules can offer many advantages for robotic locomotion and manipulation: the resulting system can be more reliable and fault-tolerant and can provide the flexibility needed for new tasks and environments. However, self-aggregation of modules is challenging, especially when alignment of the docking parties in a 3D environment involves both position and orientation (6D), since the docking bases may be non-stationary (e.g., floating in space, underwater, or moving along the ground) and the end-effectors may have accumulated uncertainty from the many dynamically established connections between modules. This paper presents a new framework for docking in such a context and describes a solution for sensor-guided self-reconfiguration and manipulation with non-fixed bases. The main contribution is a realistic experimental setting for 6D docking in which a modular manipulator, floating or rotating in space with a reaction wheel, searches for and docks with a target module using vision. The motion of the docking parties combines floating and manipulation, and docking precision is guided by a sensor at the tip of the docking interface. The docking itself is planned and executed by a real-time algorithm with a theoretical convergence bound. The framework has been tested in a high-fidelity physics-based simulator as well as on real robotic modules based on SuperBot. Experimental results show an average success rate of more than 86.7 percent across a variety of 6D-docking scenarios.
Reconfigurable parallel continuum robots for incisionless surgery
A. Mahoney, P. L. Anderson, P. Swaney, Fabien Maldonado, R. Webster
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759637
We propose a new class of robotic device for minimally invasive surgery that lies at the intersection of continuum, parallel, and reconfigurable robotics. The Continuum Reconfigurable Incisionless Surgical Parallel (CRISP) paradigm uses multiple needle-diameter devices inserted through the skin and assembled into parallel structures inside the body. The parallel structure can be reconfigured inside the patient's body to satisfy changing task requirements, such as reaching initially inaccessible locations or modifying mechanical stiffness for manipulation or palpation. Another potential advantage of the CRISP concept is that many small (needle-sized) entry points may be preferable, for both patient healing and cosmesis, to the one or more larger ports needed to admit current surgical robots. This paper presents a mechanics-based model for CRISP forward and inverse kinematics, along with experimental validation.
Mixed-integer programming for automatic walking step duration
M. Maximo, Carlos H. C. Ribeiro, R. J. Afonso
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759794
This paper presents a mixed-integer model predictive controller for walking. In the proposed scheme, mixed-integer quadratic programs (MIQPs) are solved online to simultaneously decide center-of-mass jerks, footstep positions, and step durations while respecting actuation, geometry, and contact constraints. Simulation results show that this MIQP scheme keeps balance under very strong disturbances where a fixed-step-duration controller fails.
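The abstract does not give the paper's formulation, but why step durations make the program mixed-integer can be illustrated with a toy planner: the duration is a discrete choice, and for each choice the continuous jerk subproblem is quadratic with a closed-form optimum, so a tiny instance can be solved by enumerating the discrete options (the branch-and-solve idea that a MIQP solver automates). The one-segment model and all numbers below are illustrative assumptions, not the paper's dynamics:

```python
def plan_step(D=0.3, T_options=(0.3, 0.4, 0.5, 0.6), T_ref=0.4, lam=50.0):
    """Toy mixed-integer step planner. Discrete decision: step duration T.
    Continuous decision: a constant CoM jerk j that must carry the CoM a
    distance D from rest, x(T) = j * T**3 / 6 = D. Each discrete choice
    leaves a quadratic cost with a closed-form optimum, so we enumerate."""
    best = None
    for T in T_options:
        j = 6.0 * D / T**3                    # continuous subproblem, closed form
        cost = j**2 + lam * (T - T_ref)**2    # jerk effort + timing penalty
        if best is None or cost < best[0]:
            best = (cost, T, j)
    _, T, j = best
    return T, j
```

With these weights the planner stretches the step duration to reduce jerk effort, mirroring how the paper's controller trades timing against actuation limits.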
Omni-directional person tracking on a flying robot using occlusion-robust ultra-wideband signals
Benjamin Hepp, Tobias Naegeli, Otmar Hilliges
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759054
We present a tracking system based on ultra-wideband (UWB) radio transceivers mounted on a robot and a target. In contrast to typical UWB localization systems with fixed transceivers in the environment, we require only that the target be instrumented with a single UWB transceiver. Our system works in GPS-denied environments and does not suffer from long-term drift or a limited field of view. This paper reports the localization algorithm and implementation details. We also present a quantitative evaluation of accuracy (10 cm average position error on a square with a side length of 4 m) and application scenarios in which a quadrotor flies in close proximity to a person and handles occlusion of the target. Finally, we release our implementation as open-source software.
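With several robot-mounted transceivers at known positions and range measurements to the target's single transceiver, a position fix can be obtained by linearized least squares. A generic trilateration sketch, not the paper's estimator:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Position fix from ranges to known anchor points via linear least
    squares: subtracting the first range equation from the rest cancels the
    quadratic ||x||^2 term, leaving an overdetermined linear system in x."""
    a = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

In practice such a fix would be fed to a filter that also handles range noise and outliers from occlusion; the sketch shows only the geometric core.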
Motion planning for a reversing general 2-trailer configuration using Closed-Loop RRT
Niclas Evestedt, Oskar Ljungqvist, Daniel Axehill
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759544
Reversing a dolly-steered trailer configuration is a hard task for any driver without extensive training. In this work we present a motion planning and control framework that automatically plans and executes complicated manoeuvres. The unstable dynamics of the reversing general 2-trailer configuration with off-axle hitching are first stabilised by an LQ controller; a pure pursuit path tracker then operates at a higher level, giving a cascaded controller that can track piecewise linear reference paths. This controller, together with a kinematic model of the trailer configuration, is used for forward simulations within a Closed-Loop Rapidly-exploring Random Tree framework to generate motion plans that are not only kinematically feasible but also account for the limits of the controller's tracking performance when reversing. The approach is evaluated in Monte Carlo simulations on three scenarios, achieving high success rates. Finally, the approach is successfully tested on a small-scale test platform, where the motion plan is computed and then sent to the platform for execution.
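The pure pursuit layer mentioned above steers along the circular arc that joins the vehicle to a lookahead point on the reference path. A minimal sketch of the standard curvature law (not the paper's full cascaded controller):

```python
import math

def pure_pursuit_curvature(pose, target):
    """Curvature of the circular arc from the vehicle pose (x, y, yaw) to a
    lookahead point on the path: kappa = 2 * y_l / L**2, where y_l is the
    target's lateral offset in the vehicle frame and L the distance to it."""
    x, y, yaw = pose
    dx, dy = target[0] - x, target[1] - y
    y_l = -math.sin(yaw) * dx + math.cos(yaw) * dy
    return 2.0 * y_l / (dx * dx + dy * dy)
```

A target straight ahead yields zero curvature; lateral offset bends the commanded arc toward the path. In the paper's cascade this command acts on the LQ-stabilised trailer system rather than directly on the steering.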
Unifying consensus and covariance intersection for decentralized state estimation
A. Tamjidi, S. Chakravorty, Dylan A. Shell
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759044
This paper presents a new recursive information consensus filter for decentralized dynamic-state estimation. Local estimators are assumed to have access only to local information, and no structure is assumed about the topology of the communication network, which need not be connected at all times. Iterative Covariance Intersection (ICI) is used to reach consensus over priors, which may become correlated, while consensus over new information is handled using weights based on a Metropolis-Hastings Markov chain (MHMC). We establish bounds on estimation performance and show that our method produces unbiased, conservative estimates that are better than those of CI. The performance of the proposed method is evaluated and compared with competing algorithms on an atmospheric dispersion problem.
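Covariance Intersection fuses two estimates whose cross-correlation is unknown by taking a convex combination of their information matrices. A sketch of the standard two-estimate CI update, with the weight chosen by a simple grid search over trace(P) (the paper's ICI iterates this across a network; the grid search is an illustrative shortcut for the usual 1-D optimization):

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, grid=999):
    """Fuse two estimates whose cross-correlation is unknown:
        P^-1 = w * Pa^-1 + (1 - w) * Pb^-1,  w in [0, 1],
    with w chosen here by grid search to minimize trace(P). The fused
    estimate is consistent (conservative) for any true cross-covariance."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    def fuse(w):
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        x = P @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
        return x, P

    ws = np.linspace(1e-6, 1.0 - 1e-6, grid)
    w_best = min(ws, key=lambda w: np.trace(fuse(w)[1]))
    return fuse(w_best)
```

Unlike a naive Kalman-style fusion, CI never claims more certainty than either input justifies, which is what makes it safe when priors shared over a network have become correlated.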
Estimating object hardness with a GelSight touch sensor
Wenzhen Yuan, M. Srinivasan, E. Adelson
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759057
Hardness sensing is a valuable capability for a robot touch sensor. We describe a novel method of hardness sensing that does not require accurate control of contact conditions. The GelSight sensor is a tactile sensor that provides high-resolution tactile images, enabling a robot to infer object properties such as geometry and fine texture, as well as contact force and slip conditions. The sensor is pressed against silicone samples by a human or a robot, and we measure sample hardness using only data from the sensor, without a separate force sensor and without precise knowledge of the contact trajectory. We describe the image features that indicate object hardness. For hemispherical objects, we develop a model that estimates sample hardness with about 4% error over the range 8 Shore 00 to 45 Shore A. With this technology, a robot can more easily infer the hardness of touched objects, improving both its object recognition and its manipulation strategy.
A visual-based shared control architecture for remote telemanipulation
Firas Abi-Farraj, Nicolò Pedemonte, P. Giordano
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Pub Date: 2016-10-01. DOI: 10.1109/IROS.2016.7759628
Cleaning up the past half century of nuclear waste is the largest environmental remediation project in all of Europe. Nuclear waste must be sorted, segregated, and stored according to its radiation level in order to optimize maintenance costs. The objective of this work is to develop a shared control framework for remote manipulation of objects using visual information. In the presented scenario, the human operator controls a system composed of two robotic arms, one equipped with a gripper and the other with a camera. To facilitate the operator's task, a subset of the gripper motions is regulated by an autonomous algorithm exploiting the camera view of the scene. At the same time, the operator controls the remaining null-space motions w.r.t. the primary (autonomous) task by acting on a force-feedback device. A novel force-feedback algorithm is also proposed to inform the user about constraints of the robotic system, such as joint limits. Human/hardware-in-the-loop experiments with simulated slave robots and a real master device demonstrate the feasibility and effectiveness of the approach.
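Null-space sharing of this kind is commonly realized with the classic redundancy-resolution law q_dot = J^+ x_dot + (I - J^+ J) q_dot_user. A sketch under that assumption (the abstract does not specify the paper's exact control law):

```python
import numpy as np

def shared_control_velocity(J, x_dot_task, q_dot_user):
    """Resolved-rate shared control: the autonomous task commands the
    end-effector velocity x_dot_task through Jacobian J, while the
    operator's joint-velocity command is projected into J's null space,
    so it moves the arm without disturbing the primary task:
        q_dot = J^+ x_dot + (I - J^+ J) q_dot_user
    """
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot_task + N @ q_dot_user
```

Because N annihilates any joint velocity that would produce end-effector motion, the operator's input reshapes the arm's posture while the vision-driven task proceeds undisturbed.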