Deep Visual Heuristics: Learning Feasibility of Mixed-Integer Programs for Manipulation Planning
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197291
Danny Driess, Ozgur S. Oguz, Jung-Su Ha, M. Toussaint
In this paper, we propose a deep neural network that predicts the feasibility of a mixed-integer program from visual input for robot manipulation planning. Integrating learning into task and motion planning is challenging, since it is unclear how the scene and goals can be encoded as input to the learning algorithm in a way that enables generalization over a variety of tasks in environments with changing numbers of objects and goals. To achieve this, we propose to encode the scene and the target object directly in the image space. Our experiments show that the proposed network generalizes to scenes with multiple objects, although during training only two objects are present at the same time. By using the learned network as a heuristic to guide the search over the discrete variables of the mixed-integer program, the number of optimization problems that have to be solved to find a feasible solution or to detect infeasibility can be greatly reduced.
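To make the heuristic's role concrete, here is a minimal sketch (not the authors' implementation) of how a learned feasibility score can order the discrete assignments of a mixed-integer program so that the expensive continuous solves are attempted in a promising order. `net`, `solve_nlp`, and the candidate encoding are hypothetical stand-ins for the paper's network, image-space encoding, and optimizer.

```python
# Sketch: learned feasibility scores as a search heuristic over the
# discrete variables of a mixed-integer program. All names are
# illustrative stand-ins, not the paper's API.
import torch

def guided_search(scene_image, target_mask, candidates, net, solve_nlp):
    """Try discrete assignments in order of predicted feasibility."""
    with torch.no_grad():
        # One forward pass scores every candidate discrete assignment.
        scores = net(scene_image, target_mask)        # shape: (num_candidates,)
    order = torch.argsort(scores, descending=True)
    for idx in order.tolist():
        solution = solve_nlp(candidates[idx])         # expensive continuous solve
        if solution is not None:                      # feasible assignment found
            return candidates[idx], solution
    return None, None                                 # every candidate infeasible
```

The point of the ordering is that a well-calibrated predictor front-loads the feasible assignments, so far fewer `solve_nlp` calls are needed before success, and confident low scores across the board signal infeasibility early.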
Towards Noise Resilient SLAM
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196745
Anirud Thyagharajan, O. J. Omer, D. Mandal, S. Subramoney
Sparse-indirect SLAM systems have been dominant owing to their computational efficiency and photometric invariance. Depth sensors are critical to SLAM frameworks because they provide metric scale for the 3D world, yet they are plagued by a wide variety of noise sources with both lateral and axial components. In this work, we demonstrate the detrimental impact of these depth noise components on the performance of the state-of-the-art sparse-indirect SLAM system ORB-SLAM2. We propose (i) Map-Point Consensus based Outlier Rejection (MC-OR) to counter lateral noise, and (ii) Adaptive Virtual Camera (AVC) to combat axial noise. MC-OR utilizes consensus information between multiple sightings of the same landmark to disambiguate noisy depth and filter it out before pose optimization. In AVC, we introduce an error vector as an accurate representation of the axial depth error, together with an adaptive algorithm that finds the virtual camera location for projecting the error used in the objective function of the pose optimization. Our techniques work equally well for the stereo image pairs and RGB-D input directly used by sparse-indirect SLAM systems. Tested on the TUM (RGB-D) and EuRoC (stereo) datasets, our methods outperform the state-of-the-art ORB-SLAM2 by 2-3x, especially in sequences critically affected by depth noise.
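A rough sketch of the consensus idea behind MC-OR, under the assumption that agreement is measured against the median of a landmark's depth sightings (the paper's actual consensus statistic may differ):

```python
# Sketch of map-point depth consensus across multiple sightings.
# The median-based inlier test and thresholds are assumptions, not
# the paper's exact formulation.
import numpy as np

def consensus_filter(depth_obs, inlier_tol=0.05):
    """Keep a landmark's depth only if enough sightings agree."""
    depths = np.asarray(depth_obs, dtype=float)
    median = np.median(depths)
    inliers = np.abs(depths - median) < inlier_tol * median  # relative tolerance
    # Require a strict majority of sightings to agree before trusting the point.
    if inliers.sum() >= max(2, len(depths) // 2 + 1):
        return depths[inliers].mean()    # fused depth passed to pose optimization
    return None                          # rejected as depth noise
```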
Fast Adaptation of Deep Reinforcement Learning-Based Navigation Skills to Human Preference
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197159
Jinyoung Choi, C. Dance, Jung-Eun Kim, Kyungsik Park, Jaehun Han, Joonho Seo, Minsu Kim
Deep reinforcement learning (RL) is being actively studied for robot navigation due to its promise of superior performance and robustness. However, most existing deep RL navigation agents are trained using fixed parameters, such as maximum velocities and weightings of reward components. Since the optimal choice of parameters depends on the use case, it can be difficult to deploy such existing methods in a variety of real-world service scenarios. In this paper, we propose a novel deep RL navigation method that can adapt its policy to a wide range of parameters and reward functions without expensive retraining. Additionally, we explore a Bayesian deep learning method that optimizes these parameters using only a small amount of preference data. We empirically show that our method can learn diverse navigation skills and quickly adapt its policy to a given performance metric or to human preference. We also demonstrate our method in real-world scenarios.
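The paper's Bayesian deep learning method is not reproduced here; as an illustration of how a preference-driven parameter search can get by with little data, the sketch below uses a Gaussian-process surrogate with a UCB acquisition rule. This is a stand-in, not the authors' method, and `ask_preference` is a hypothetical oracle that returns a scalar rating for a parameter vector.

```python
# Sketch: preference-guided search over navigation parameters with a
# GP surrogate. The surrogate, acquisition, and oracle are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def optimize_parameters(ask_preference, bounds, n_iter=20, n_cand=256):
    """bounds: (d, 2) array of [low, high] per parameter."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(5, len(lo)))           # initial random designs
    y = np.array([ask_preference(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
        mu, sigma = gp.predict(cand, return_std=True)
        nxt = cand[np.argmax(mu + 1.0 * sigma)]          # UCB: explore + exploit
        X = np.vstack([X, nxt])
        y = np.append(y, ask_preference(nxt))
    return X[np.argmax(y)]                               # best-rated parameters
```

Because the policy is already conditioned on the parameters, each query only changes the policy's input, so no retraining happens inside the loop; that is what makes a few dozen preference ratings sufficient.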
An Actuation Fault Tolerance Approach to Reconfiguration Planning of Modular Self-folding Robots
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196574
Meibao Yao, Xueming Xiao, Yang Tian, Hutao Cui, J. Paik
This paper presents a novel approach to fault-tolerant reconfiguration of modular self-folding robots. Among the various types of faults that can occur in a modular system, we focus on tolerating complete actuation failure of active modules, which can cause imprecise robotic motion and even reconfiguration failure. Our approach utilizes the reconfigurability of modular self-folding robots and investigates intra-module connections to determine initial patterns that are inherently fault tolerant. We exploit the redundancy of actuation and distribute active modules in both layout-based and target-based scenarios, such that reconfiguration schemes with user-specified fault-tolerance capability can be generated for an arbitrary input initial pattern or 3D configuration. Our methods are demonstrated in computer-aided simulation on the robotic platform of Mori, a modular origami robot. The simulation results validate that the proposed algorithms yield fault-tolerant initial patterns and distribution schemes of active modules for several 2D and 3D configurations with Mori, while retaining generalizability to a wide range of modular self-folding robots.
The Tiercel: A novel autonomous micro aerial vehicle that can map the environment by flying into obstacles
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197269
Yash Mulgaonkar, Wenxin Liu, Dinesh Thakur, Kostas Daniilidis, C. J. Taylor, Vijay R. Kumar
Autonomous flight through unknown environments in the presence of obstacles is a challenging problem for micro aerial vehicles (MAVs). A majority of current state-of-the-art research treats obstacles as opaque objects that can be easily sensed by optical sensors such as cameras or LiDARs. However, in indoor environments with glass walls and windows, or in scenarios with smoke and dust, robots (even birds) have a difficult time navigating through the unknown space. In this paper, we present the design of a new class of micro aerial vehicles that achieve autonomous navigation and are robust to collisions. In particular, we present the Tiercel MAV: a small, agile, lightweight, and collision-resilient robot powered by a cellphone-grade CPU. Our design exploits contact to infer the presence of transparent or reflective obstacles such as glass walls, integrating touch with visual perception for SLAM. The Tiercel localizes using visual-inertial odometry (VIO) running on board with a single downward-facing fisheye camera and an IMU. We show how our collision detector design and experimental setup enable us to characterize the impact of collisions on VIO. We further develop a planning strategy that enables the Tiercel to fly autonomously in an unknown space, sustaining collisions and creating a 2D map of the environment. Finally, we demonstrate a swarm of three autonomous Tiercel robots safely navigating and colliding through an obstacle field to reach their objectives.
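One way to read "integrating touch with visual perception" is that a detected collision stamps an obstacle into the occupancy grid even where the camera reported free space. A hedged sketch of that fusion step follows; the 0.1 m contact offset, grid conventions, and function shape are assumptions, not the paper's code.

```python
# Sketch: fold a collision event into a 2D occupancy grid so that
# glass walls invisible to the camera still enter the map.
import numpy as np

def mark_collision(grid, robot_xy, heading, resolution=0.05, origin=(0.0, 0.0)):
    """Stamp an obstacle cell just ahead of the robot on contact."""
    # Assume the contact point lies a fixed 0.1 m along the heading.
    contact = np.asarray(robot_xy) + 0.1 * np.array([np.cos(heading), np.sin(heading)])
    ix = int((contact[0] - origin[0]) / resolution)
    iy = int((contact[1] - origin[1]) / resolution)
    if 0 <= ix < grid.shape[0] and 0 <= iy < grid.shape[1]:
        grid[ix, iy] = 1.0   # occupied with full confidence: touch overrides vision
    return grid
```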
Acoustofluidic Tweezers for the 3D Manipulation of Microparticles
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197265
Xinyi Guo, Zhichao Ma, R. Goyal, Moonkwang Jeong, W. Pang, P. Fischer, X. Duan, T. Qiu
Non-contact manipulation is of great importance in the actuation of micro-robotics. It is challenging to manipulate micro-scale objects without contact over large spatial distances in fluid. Here, we describe a novel approach for the dynamic position control of microparticles in three-dimensional (3D) space, based on high-speed acoustic streaming generated by a micro-fabricated gigahertz transducer. The hydrodynamic force generated by the streaming flow field has a vertical component acting against gravity and a lateral component pointing towards the center, so a microparticle can be stably trapped at a position far from the transducer surface and manipulated over centimeter distances in 3D. Only the hydrodynamic force is used for particle manipulation, making the system a versatile tool regardless of the material properties of the trapped particle. The system shows high reliability and manipulation velocity, revealing its potential for applications in robotics and automation at small scales.
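The trapping argument can be made explicit with a standard Stokes-drag force balance; this is our own assumption for illustration, not a formula quoted from the paper. Here μ is the fluid viscosity, a and V the particle radius and volume, u_z the vertical streaming velocity at the particle, ρ_p and ρ_f the particle and fluid densities, and g gravity.

```latex
% Assumed Stokes-drag trapping balance (illustrative, not from the paper):
% at equilibrium the vertical drag from the streaming flow supports the
% particle's net weight, while lateral drag restores it toward the axis.
F_z^{\mathrm{drag}} = 6\pi \mu a \, u_z(\mathbf{r}) = (\rho_p - \rho_f)\, V g
```

Since both sides depend only on the flow field and geometry, not on the particle's electromagnetic or acoustic contrast properties, this balance is consistent with the paper's claim that the trap is material-agnostic.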
Efficient Communication in Large Multi-robot Networks
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196672
Ayan Dutta, Anirban Ghosh, Stephen Sisley, O. P. Kreidl
To achieve coordination in a multi-robot system, the robots typically resort to some form of communication among themselves. Most multi-robot coordination frameworks study high-level coordination strategies, while 'how' the ground-level communication takes place is assumed to be handled by another program. In this paper, we study the communication routing problem for large multi-robot systems in which the robots have limited communication ranges. The objective is to send a message from one robot to another in the network, routed through a small number of other robots. To this end, we propose a communication model between any pair of robots using peer-to-peer radio communication. Our proposed model is generic to any type of message and guarantees low-hop routing between any pair of robots in the network. This helps the robots exchange large messages (e.g., multi-spectral images) in a short amount of time. Results show that our proposed approach easily scales up to 1000 robots while drastically reducing the space complexity of maintaining the network information.
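The paper's routing structure, which also achieves the reduced space complexity, is not reproduced here. As a baseline illustration of what hop-minimal routing means on the induced communication graph, the sketch below builds a unit-disk graph from robot positions and runs breadth-first search, which returns a path with the fewest relays.

```python
# Baseline sketch: fewest-hop route on a unit-disk communication graph.
# The paper's method improves on this by storing far less network state.
from collections import deque
import numpy as np

def low_hop_route(positions, comm_range, src, dst):
    """positions: (n, 2) robot coordinates; src, dst: node indices."""
    pts = np.asarray(positions, dtype=float)
    n = len(pts)
    # Edge when two robots are within communication range; O(n^2) suffices here.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = [np.flatnonzero((dist[i] <= comm_range) & (dist[i] > 0)) for i in range(n)]
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # reconstruct src -> dst path
            path = []
            while u is not None:
                path.append(int(u))
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None                            # dst unreachable in this graph
```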
Gradient and Log-based Active Learning for Semantic Segmentation of Crop and Weed for Agricultural Robots
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196722
Rasha Sheikh, Andres Milioto, Philipp Lottes, C. Stachniss, Maren Bennewitz, T. Schultz
Annotated datasets are essential for supervised learning. However, annotating large datasets is a tedious and time-intensive task. This paper addresses active learning in the context of semantic segmentation with the goal of reducing the human labeling effort. Our application is agricultural robotics and we focus on the task of distinguishing between crop and weed plants from image data. A key challenge in this application is the transfer of an existing semantic segmentation CNN to a new field, in which growth stage, weeds, soil, and weather conditions differ. We propose a novel approach that, given a trained model on one field together with rough foreground segmentation, refines the network on a substantially different field, providing an effective method of selecting samples to annotate in support of the transfer. We evaluated our approach on two challenging datasets from the agricultural robotics domain and show that we achieve higher accuracy with a smaller number of samples compared to random sampling as well as entropy-based sampling, which consequently reduces the required human labeling effort.
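The gradient- and log-based criteria are the paper's contribution and are not sketched here; the entropy-based baseline it is compared against is simple enough to show where the selection loop plugs in. Array shapes and the top-k interface are assumptions for illustration.

```python
# Baseline sketch: entropy-based sample selection for segmentation.
# This is the comparison method, not the paper's proposed criterion.
import numpy as np

def select_for_annotation(prob_maps, k):
    """prob_maps: (N, H, W, C) softmax outputs over unlabeled images.
    Returns indices of the k most uncertain images to send for labeling."""
    eps = 1e-12
    # Per-pixel predictive entropy, then averaged over the image.
    entropy = -(prob_maps * np.log(prob_maps + eps)).sum(axis=-1)  # (N, H, W)
    scores = entropy.mean(axis=(1, 2))                             # one score per image
    return np.argsort(scores)[::-1][:k]                            # most uncertain first
```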
Reactive Control and Metric-Topological Planning for Exploration
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197381
Michael T. Ohradzansky, Andrew B. Mills, Eugene R. Rush, Danny G. Riley, E. Frew, J. Humbert
Autonomous navigation in unknown environments with the intent of exploring all traversable areas is a significant challenge for robotic platforms. In this paper, a simple yet reliable method for exploring unknown environments is presented based on bio-inspired reactive control and metric-topological planning. The reactive control algorithm is modeled after the spatial decomposition of wide- and small-field patterns of optic flow in the insect visuomotor system. Centering behaviour and small-obstacle detection and avoidance are achieved through wide-field integration and Fourier residual analysis of the instantaneously measured nearness, respectively. A topological graph is estimated using image processing techniques on a continuous occupancy grid. Node paths are rapidly generated to navigate to the nearest unexplored edge in the graph. It is shown through rigorous field-testing that the proposed control and planning method is robust, reliable, and computationally efficient.
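A minimal sketch of the wide-field-integration idea, assuming a planar nearness profile n(θ) from a range scan and a sin(θ) weighting for the centering response; the paper's exact weighting patterns, gains, and sign conventions differ.

```python
# Sketch: centering command from wide-field integration of nearness.
# The sin(theta) projection and gain are illustrative assumptions.
import numpy as np

def wfi_centering_command(nearness, gain=1.0):
    """nearness: sampled n(theta) over [-pi, pi) from a planar scan."""
    theta = np.linspace(-np.pi, np.pi, len(nearness), endpoint=False)
    # Inner product with sin(theta): positive when obstacles are nearer on
    # one side, producing a steering command toward the corridor center.
    asymmetry = np.sum(nearness * np.sin(theta)) * (2 * np.pi / len(nearness))
    return -gain * asymmetry
```

Small obstacles barely shift this low-order projection, which is why the abstract pairs it with a Fourier residual test: after subtracting the low-order harmonics, a large residual flags a small obstacle that the centering term alone would miss.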
Eye-in-Hand 3D Visual Servoing of Helical Swimmers Using Parallel Mobile Coils
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197276
Zhengxin Yang, Lidong Yang, Li Zhang
Magnetic helical microswimmers can be propelled by a rotating magnetic field and are adept at passing through narrow spaces. To date, various magnetic actuation systems and control methods have been developed to drive these microswimmers. However, steering their spatial movement in a large workspace, which could be significant for potential medical applications, remains challenging. To this end, this paper designs an eye-in-hand stereo-vision module and a corresponding refraction-rectified location algorithm. Combined with the motor module and the coil module, the mobile-coil system is capable of generating dynamic magnetic fields in a large 3D workspace. Based on this system, a robust triple-loop stereo visual servoing strategy is proposed that performs simultaneous tracking, locating, and steering, through which the helical swimmer is able to follow a long-distance 3D path. A scaled-up magnetic helical swimmer is employed in the path-following experiments. Our prototype system achieves a cylindrical workspace more than 200 mm in diameter, and the mean path-tracking error is less than 2 mm.
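A hedged sketch of what refraction rectification must accomplish: a point observed through a flat air-fluid interface appears shallower than it is, and Snell's law recovers the true ray angle and hence the true depth. This is the textbook flat-port approximation, assumed for illustration; the paper's full stereo camera model is more involved.

```python
# Sketch: flat-interface refraction correction via Snell's law.
# The flat-port geometry and single-interface model are assumptions.
import numpy as np

def refraction_corrected_depth(apparent_depth, incidence_deg, n_fluid=1.33):
    """Recover true depth below a flat air-fluid interface.
    incidence_deg: ray angle in air measured from the interface normal."""
    ti = np.radians(incidence_deg)            # ray angle in air
    if np.isclose(ti, 0.0):
        return apparent_depth * n_fluid       # paraxial limit of the formula below
    tr = np.arcsin(np.sin(ti) / n_fluid)      # refracted angle in the fluid
    # Apparent and true depth subtend the same lateral offset at the interface:
    #   d_apparent * tan(ti) = d_true * tan(tr)
    return apparent_depth * np.tan(ti) / np.tan(tr)
```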