Adaptive Control of an Unbalanced Two-Axis Gimbal for Application to Inertially Stabilized Platforms
Andrei Battistel, T. R. Oliveira, Victor Hugo Pereira Rodrigues
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981662 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 99-104
Inertially stabilized platforms are of interest in several engineering areas, such as telecommunications, robotics, and military systems. The objective is to keep the attitude of a target object constant despite the movements of a host vehicle. This paper addresses the problem of stabilizing a platform using a two-degree-of-freedom gimbal as the mechanical actuator. Mechanical unbalances are taken into account, and a MIMO version of the Binary Model Reference Adaptive Controller is employed. The algorithm uses a newly proposed differentiator based on high-order sliding modes that is global and exact. This differentiator can also be used for monitoring and estimation purposes in robotic systems. Simulation results are presented using as inputs experimental data acquired from a vehicle driving through a circuit with ground obstacles.
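The differentiator proposed in the paper is new, but its family is well known: a first-order sliding-mode ("robust exact") differentiator in the style of Levant can be sketched as below. This is a classical variant for background only, not the paper's global differentiator; the gains `lam1`, `lam2` and the Lipschitz bound `L` are assumed typical values.

```python
import math

def levant_differentiator(f_samples, dt, L=2.0, lam1=1.5, lam2=1.1):
    """Classical first-order sliding-mode differentiator (Levant-style).

    f_samples : uniformly sampled signal f(t)
    L         : assumed Lipschitz bound on f''(t)
    Returns the derivative estimate z1 at each sample (Euler discretization).
    """
    sign = lambda x: (x > 0) - (x < 0)
    z0, z1 = f_samples[0], 0.0  # state: signal estimate and derivative estimate
    out = []
    for f in f_samples:
        e = z0 - f
        # continuous correction term driving z0 onto f in finite time
        v = -lam1 * math.sqrt(L) * math.sqrt(abs(e)) * sign(e) + z1
        z0 += dt * v
        z1 += dt * (-lam2 * L * sign(z1 - v))
        out.append(z1)
    return out
```

With a sufficiently small sampling step, the estimate converges to the exact derivative up to a chattering ripple on the order of the step size.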
Automatic Configuration of the Structure and Parameterization of Perception Pipelines
Vincent Dietrich, Bernd Kast, Michael Fiegert, Sebastian Albrecht, M. Beetz
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981611 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 312-319
The configuration of perception pipelines is a complex procedure that requires substantial engineering effort and knowledge. A pipeline consists of interconnected perception operators and their parameters, which leads to a large configuration space of pipeline structures and parameterizations. This space must be explored efficiently to find a solution that fulfills the specific requirements of the target application. In this paper, we present an approach that performs automatic configuration based on structure templates and sequential model-based optimization. The structure templates reduce the search space and encode prior engineering knowledge. We introduce a structure template based on hypothesis generation, hypothesis refinement, and hypothesis testing to demonstrate the effectiveness of the approach. Experimental evaluation with state-of-the-art operators is performed on data from the T-LESS dataset as well as in a real-world robotic assembly task.
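The core idea of template-constrained configuration can be illustrated with a deliberately simplified search: a template fixes the pipeline structure and exposes a parameter grid, and an objective function scores each candidate. The paper uses sequential model-based optimization (a surrogate model proposes candidates); the exhaustive sweep below is a stand-in, and the parameter names are invented for illustration.

```python
import itertools

def configure_pipeline(template, score_fn):
    """Exhaustive search over a structure template's parameter grid.

    template : dict mapping parameter name -> list of candidate values
               (the fixed pipeline structure is implicit in the template)
    score_fn : evaluates one configuration dict; higher is better
    Returns the best configuration and its score.
    """
    names = sorted(template)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(template[n] for n in names)):
        cfg = dict(zip(names, values))
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score
```

A model-based optimizer would replace the full product with a loop that fits a surrogate to the evaluations so far and proposes only promising configurations, which matters when one evaluation means running the whole pipeline.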
Deep Reinforcement Learning Control of an Autonomous Wheeled Robot in a Challenge Task: Combined Visual and Dynamics Sensoring
Luiz Afonso Marão, Larissa Casteluci, Ricardo V. Godoy, Henrique B. Garcia, D. V. Magalhães, G. Caurin
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981598 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 368-373
This paper presents a Deep Reinforcement Learning agent for a four-wheeled rover in a multi-goal competition task under the influence of noisy GPS measurements. Previous related work implemented a similar agent for the same task using only raw dynamics measurements as observations. The Proximal Policy Optimization algorithm combined with Universal Value Function Approximators resulted in a system able to overcome very noisy GPS observations and complete the challenge task. This work introduces a frontal camera that adds visual input to the rover's observations during task execution. The main change to the algorithm is in the neural network architectures, where a second input layer was added to handle the image observations. In some variants of the networks, Long Short-Term Memory (LSTM) cells were included as well. The addition of the camera did not yield a significant increase in the stability or performance of the network, and the required computation time increased.
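The Universal Value Function Approximator idea amounts to conditioning one policy/value network on the goal as part of its input, so a single network generalizes across goals. A minimal sketch of building such a goal-conditioned observation vector is shown below; the field layout (raw dynamics, noisy GPS, goal offset) is a hypothetical illustration, not the paper's exact observation design.

```python
def goal_conditioned_obs(dynamics, gps_xy, goal_xy):
    """Concatenate raw dynamics readings, the (noisy) GPS position, and the
    goal expressed as a relative offset into one flat observation vector,
    as a UVFA-style agent would feed to its networks."""
    rel = [goal_xy[0] - gps_xy[0], goal_xy[1] - gps_xy[1]]
    return list(dynamics) + list(gps_xy) + rel
```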
Visual-inertial SLAM aided estimation of anchor poses and sensor error model parameters of UWB radio modules
P. Lutz, M. J. Schuster, Florian Steidle
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981544 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 739-746
Local positioning technologies based on ultra-wideband (UWB) ranging have become broadly available and accurate enough for various robotic applications. In an infrastructure setup with static anchor radio modules, one common problem is determining their global positions within the world coordinate frame. Furthermore, issues such as complex radio-frequency wave propagation make it difficult to design a consistent sensor error model that generalizes well across different anchor setups and environments. Combining radio-based local positioning systems with a visual-inertial navigation system (VINS) can provide very accurate pose estimates for calibrating the radio-based localization modules and, at the same time, alleviate the inherent drift of visual-inertial navigation. We propose an approach that uses a visual-inertial SLAM system with fish-eye stereo cameras and an IMU on a micro aerial vehicle (MAV) to estimate the anchors' 6D poses as well as the parameters of a UWB module sensor error model. Fiducial markers on all anchor radio modules serve as artificial landmarks within the SLAM system to obtain accurate anchor module pose estimates.
Index Terms: MAVs, mobile robots, SLAM, UWB, radio localization, sensor calibration
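To make the calibration idea concrete: once the SLAM system provides accurate "ground-truth" ranges to an anchor, a simple UWB error model such as `measured = a * true + b` can be fitted to the ranging residuals. The paper estimates its error model parameters jointly with the anchor poses inside the SLAM optimization; the closed-form least-squares fit below is a simplified stand-alone illustration, and the linear model form is an assumption.

```python
def fit_range_error_model(true_d, meas_d):
    """Fit a linear UWB ranging error model  meas = a*true + b  by
    ordinary least squares over paired (true, measured) distances."""
    n = len(true_d)
    mx = sum(true_d) / n
    my = sum(meas_d) / n
    sxx = sum((x - mx) ** 2 for x in true_d)
    sxy = sum((x - mx) * (y - my) for x, y in zip(true_d, meas_d))
    a = sxy / sxx          # scale error
    b = my - a * mx        # constant bias
    return a, b
```

Corrected ranges would then be recovered as `(meas - b) / a` before being fused into the localization filter.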
Evaluating Energy Consumption of an Active Magnetorheological Knee Prosthesis
R. Andrade, A. B. Filho, C. Vimieiro, M. Pinotti
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981642 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 75-80
This paper presents a study of the energy consumption of the AMRK, an Active Magnetorheological Knee actuator developed for transfemoral prostheses. The system consists of an active motor unit, composed of an EC motor, a harmonic drive, and a magnetorheological (MR) clutch, arranged in parallel with an MR brake. With this configuration, the AMRK can work as a motor, clutch, or brake, reproducing movements similar to those of a healthy knee. We used the dynamic models of the MR clutch, MR brake, and motor unit to simulate the energy consumption during over-ground walking in three situations: using the complete AMRK; using only the motor-reducer of the AMRK, to simulate a common active knee prosthesis (CAKP); and using only the MR brake, to simulate a common semi-active knee prosthesis (CSAKP). The operation strategy of the AMRK uses the motor unit only when concentric contraction is required to raise the body's center of gravity during midstance. When power dissipation is required, only the MR brake operates. The results show that the AMRK consumes only 14.8 J during the gait cycle, which is 3.9 times lower than the CAKP (57.2 J), while the CSAKP consumes just 6.0 J.
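The reported 3.9x figure follows directly from the per-cycle energies given in the abstract, as a quick check shows:

```python
def energy_ratio(e_cakp_j=57.2, e_amrk_j=14.8):
    """Ratio of energy per gait cycle: common active prosthesis (CAKP)
    vs. the AMRK, using the values reported in the abstract."""
    return e_cakp_j / e_amrk_j
```

`energy_ratio()` evaluates to about 3.86, which rounds to the stated factor of 3.9.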
A Quantitative Study of Tuning ROS Adaptive Monte Carlo Localization Parameters and their Effect on an AGV Localization
W. Reis, O. Morandin, K. Vivaldini
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981601 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 302-307
One key aspect of using Automated Guided Vehicles (AGVs) in an industrial environment is the effectiveness of their localization. Among the existing techniques, the use of a laser scanner stands out, and the Adaptive Monte Carlo Localization (AMCL) algorithm has become a reference in academic research. Although many works use the AMCL package, they do not fully discuss how parameter changes affect the algorithm's response or how to tune them. This work examines the distinct influence of each tested parameter on AGV localization. We performed the experiments in the same environment, with the AGV running the same path, to enable comparison across parameter variations. For the seven parameters tested, the results show the relationship between the package parameters and the localization behavior. Although the article does not aim to propose the best parameter tuning, the results indicate the direction to follow when adjusting the values.
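Many of the AMCL parameters studied in such work (particle counts, resampling interval, sensor-model weights) act on the particle filter underlying Monte Carlo localization. Its resampling step can be sketched as the standard low-variance (systematic) resampler below; this is textbook MCL machinery for context, not code from the paper, and the random start offset `r` is passed in explicitly so the routine is deterministic.

```python
def low_variance_resample(particles, weights, r):
    """Low-variance (systematic) resampling for Monte Carlo localization.

    particles : list of particle states
    weights   : unnormalized importance weights (same length)
    r         : start offset, drawn uniformly from [0, 1/M) by the caller
    Returns a new particle set where each particle is drawn with
    probability proportional to its weight, using a single random number.
    """
    m = len(particles)
    total = sum(weights)
    w = [x / total for x in weights]  # normalize
    out = []
    i, c = 0, w[0]  # index into particles and running cumulative weight
    for k in range(m):
        u = r + k / m               # evenly spaced sampling points
        while u > c:
            i += 1
            c += w[i]
        out.append(particles[i])
    return out
```

Parameters like `resample_interval` control how often this step runs, while `min_particles`/`max_particles` bound the size of the set it operates on.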
Real-time RGB-D semantic keyframe SLAM based on image segmentation learning from industrial CAD models
Howard Mahe, Denis Marraud, Andrew I. Comport
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981549 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 147-154
This paper presents methods for performing real-time semantic SLAM aimed at autonomous navigation and control of a humanoid robot in a manufacturing scenario. A novel multi-keyframe approach is proposed that simultaneously minimizes a semantic cost based on class-level features in addition to the common photometric and geometric costs. The approach is shown to robustly construct a 3D map with associated class labels relevant to robotic tasks. In contrast to existing approaches, the segmentation of these semantic classes has been learned from RGB-D sensor data aligned with an industrial CAD manufacturing model to obtain noisy pixel-wise labels. This dataset confronts the proposed approach with a complicated real-world setting and provides insight into practical use-case scenarios. The semantic segmentation network was fine-tuned for the given use case and trained in a semi-supervised manner using the noisy labels. The developed software runs in real time and is integrated with ROS to obtain a complete semantic reconstruction for the control and navigation of the HRP4 robot. Experiments in situ at the Airbus manufacturing site in Saint-Nazaire validate the proposed approach.
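The joint minimization described above can be summarized as one scalar objective that stacks the three residual types. The sketch below shows only that combination; the residual definitions and the semantic weight are illustrative assumptions, not the paper's formulation.

```python
def multi_cost(photo_res, geom_res, sem_res, w_sem=0.5):
    """Combined keyframe-alignment cost: sum of squared photometric and
    geometric residuals plus a weighted semantic (class-consistency) term,
    mirroring a joint photometric/geometric/semantic minimization."""
    return (sum(r * r for r in photo_res)
            + sum(r * r for r in geom_res)
            + w_sem * sum(r * r for r in sem_res))
```

An optimizer over the keyframe poses would then drive this total cost down, so that pixels agree in intensity, depth, and predicted class label simultaneously.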
Towards Beauty: Robot Following Aesthetics Gradients
M. Franzius
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981647 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 55-60
An increasing number of devices are equipped with cameras, generating large amounts of images. State-of-the-art technologies make it possible to automatically identify relevant and aesthetically pleasing images after they have been stored. Here, we demonstrate a robot that estimates the gradient of image aesthetics in its environment and actively navigates towards the maximum. Aesthetics navigation is integrated into a modified robotic lawnmower, which switches online between tasks based on estimated aesthetics scores. This behavior yields higher aesthetics scores than offline selection of images captured during standard behavior. The proposed system extends robotic behavior from the purely functional towards a cooperative and empathic level.
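Navigating toward an aesthetics maximum is, at its core, gradient following on a scalar score field. A minimal greedy sketch is below: the robot probes the score at neighboring positions and steps to the best one until no neighbor improves. The `score` callable stands in for the robot's learned image-aesthetics estimator (an assumption; the paper's robot estimates the gradient from camera images, not from position probes).

```python
def follow_aesthetics_gradient(score, start, step=0.5, iters=100):
    """Greedy hill climbing on a scalar score field.

    score : callable (x, y) -> aesthetics score, higher is better
    start : initial (x, y) position
    Moves to the best of four neighboring positions each iteration and
    stops at a local maximum (or after `iters` iterations).
    """
    x, y = start
    for _ in range(iters):
        cands = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        best = max(cands, key=score)
        if score(best) <= score((x, y)):
            break  # local maximum reached
        x, y = best
    return x, y
```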
Applying the Popov-Vereshchagin Hybrid Dynamics Solver for Teleoperation under Instantaneous Constraints
Padmaja Kulkarni, Sven Schneider, Maren Bennewitz, D. Schulz, P. Plöger
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981568 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 673-680
Teleoperation is still the de facto mode of operation for robotic manipulators in hazardous and unknown environments. The objective is to move the manipulator under a multitude of constraints: primarily following the human operator's commands, but also avoiding adverse effects such as joint limits or the exertion of external forces. A classic approach to incorporating such non-instantaneous behavior into the instantaneous motion of the kinematic chain is the Closed-Loop Inverse Kinematics (CLIK) control scheme. In this paper, we present PV-CLIK, a novel CLIK realization that, for the first time, practically applies the Popov-Vereshchagin (PV) hybrid dynamics solver to map the instantaneous constraints to motion commands. By relying on the PV solver, PV-CLIK offers several benefits over traditional CLIK implementations, such as linear runtime complexity, handling constraints at the dynamics level, and fostering composable software architectures. In the experimental evaluation, we show that our implementation of PV-CLIK outperforms existing kinematics solvers in Cartesian trajectory-following tasks at high velocities.
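For context, the classical CLIK scheme computes joint velocities as qdot = J^{-1}(xdot_des + K e), closing the loop on the Cartesian error e. The sketch below implements one CLIK update for a planar two-link arm with xdot_des = 0; the paper's contribution is precisely to replace this matrix inversion with the Popov-Vereshchagin solver, so this is the baseline scheme, not PV-CLIK. Link lengths, gain, and step size are assumed values.

```python
import math

def clik_step(q, x_des, l1=1.0, l2=1.0, k=5.0, dt=0.01):
    """One Closed-Loop Inverse Kinematics update for a planar 2-link arm.

    q     : joint angles (q1, q2)
    x_des : desired end-effector position (x, y)
    Returns the joint angles after one Euler step of qdot = J^{-1} K e.
    """
    q1, q2 = q
    # forward kinematics
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    # Jacobian of (x, y) with respect to (q1, q2)
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 = l2 * math.cos(q1 + q2)
    det = j11 * j22 - j12 * j21  # = l1*l2*sin(q2); nonsingular away from full extension
    # proportional Cartesian feedback (desired velocity taken as zero)
    vx = k * (x_des[0] - x)
    vy = k * (x_des[1] - y)
    # qdot = J^{-1} v via the 2x2 closed-form inverse
    qd1 = (j22 * vx - j12 * vy) / det
    qd2 = (-j21 * vx + j11 * vy) / det
    return (q1 + dt * qd1, q2 + dt * qd2)
```

Iterating this update drives the end effector to `x_des`; schemes like PV-CLIK keep the same closed-loop idea while obtaining the motion command from a linear-time dynamics solver instead of an explicit (pseudo)inverse.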
Vision-based Autonomous Landing for Micro Aerial Vehicles on Targets Moving in 3D Space
Robson O. de Santana, L. Mozelli, A. A. Neto
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981643 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 541-546
A strategy for autonomous landing of Micro Aerial Vehicles (MAVs) on moving platforms is presented, based only on visual information from a monocular camera. The landing target is uniquely identified by previously known Augmented Reality (AR) markers, and its relative pose is estimated by visual servoing algorithms. The target trajectory in $\mathbb{R}^{3}$ is composed of a planar translation and a vertical oscillation, simulating a vessel traveling in foul weather. The visual feedback helps the aerial robot track this vessel, while a trajectory planning method based on the system's model allows its future pose to be predicted. Simulated results using the ROS framework verify the effectiveness of the proposed method.
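The motion model described above (planar translation plus vertical oscillation) makes prediction straightforward once its parameters are known. The sketch below evaluates that model at a future time; parameter names are illustrative, and in the paper the current pose comes from AR-marker visual servoing rather than being given directly.

```python
import math

def predict_platform_pose(p0, v_xy, amp, omega, phase, t_ahead):
    """Predict a landing platform's position t_ahead seconds from now under
    a constant planar velocity plus a vertical sinusoidal oscillation
    (a vessel heaving in foul weather).

    p0     : current position (x, y, z), with z measured about the mean level
    v_xy   : planar velocity (vx, vy)
    amp, omega, phase : amplitude, angular frequency, phase of the heave
    """
    x = p0[0] + v_xy[0] * t_ahead
    y = p0[1] + v_xy[1] * t_ahead
    z = amp * math.sin(omega * t_ahead + phase)
    return (x, y, z)
```

A landing planner would time the final descent for a moment when the predicted vertical position is near its mean or peak, rather than chasing the instantaneous height.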