{"title":"基于视觉的微型飞行器在三维空间运动目标上的自主降落","authors":"Robson O. de Santana, L. Mozelli, A. A. Neto","doi":"10.1109/ICAR46387.2019.8981643","DOIUrl":null,"url":null,"abstract":"A strategy for autonomous landing of Micro Aerial Vehicles (MAVs) on moving platforms is presented, based only on visual information from a monocular camera. The landing target is uniquely identified by previously known Augmented Reality (AR) markers, and its relative pose is estimated by visual servoing algorithms. Target trajectory in $\\mathbb{R}^{3}$ is composed of planar translation and vertical oscillation, simulating a vessel that travels in foul weather. The visual feedback helps the aerial robot to track this vessel, while a trajectory planning method, based on the system's model, allows predicting its future pose. Simulated results using the ROS framework are used to verify the effectiveness of our proposed method.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"27 1","pages":"541-546"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Vision-based Autonomous Landing for Micro Aerial Vehicles on Targets Moving in 3D Space\",\"authors\":\"Robson O. de Santana, L. Mozelli, A. A. Neto\",\"doi\":\"10.1109/ICAR46387.2019.8981643\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A strategy for autonomous landing of Micro Aerial Vehicles (MAVs) on moving platforms is presented, based only on visual information from a monocular camera. The landing target is uniquely identified by previously known Augmented Reality (AR) markers, and its relative pose is estimated by visual servoing algorithms. Target trajectory in $\\\\mathbb{R}^{3}$ is composed of planar translation and vertical oscillation, simulating a vessel that travels in foul weather. The visual feedback helps the aerial robot to track this vessel, while a trajectory planning method, based on the system's model, allows predicting its future pose. Simulated results using the ROS framework are used to verify the effectiveness of our proposed method.\",\"PeriodicalId\":6606,\"journal\":{\"name\":\"2019 19th International Conference on Advanced Robotics (ICAR)\",\"volume\":\"27 1\",\"pages\":\"541-546\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 19th International Conference on Advanced Robotics (ICAR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAR46387.2019.8981643\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 19th International Conference on Advanced Robotics (ICAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR46387.2019.8981643","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Vision-based Autonomous Landing for Micro Aerial Vehicles on Targets Moving in 3D Space
A strategy for autonomous landing of Micro Aerial Vehicles (MAVs) on moving platforms is presented, based only on visual information from a monocular camera. The landing target is uniquely identified by Augmented Reality (AR) markers known a priori, and its relative pose is estimated by visual servoing algorithms. The target trajectory in $\mathbb{R}^{3}$ combines planar translation with vertical oscillation, simulating a vessel traveling in foul weather. Visual feedback allows the aerial robot to track the vessel, while a trajectory planning method based on the system's model predicts its future pose. Simulation results obtained with the ROS framework verify the effectiveness of the proposed method.
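The abstract describes a model-based predictor for the target's future pose but gives no implementation details. The sketch below is a minimal, hypothetical illustration of such a predictor for the motion class described (planar translation plus vertical oscillation), assuming roughly constant-velocity planar motion and a sinusoidal heave with known frequency; the function name, parameters, and numbers are illustrative only and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): predict the future position of
# a target whose motion is modeled as constant-velocity planar translation plus a
# vertical sinusoidal oscillation. The oscillation frequency `omega` is assumed
# known here, which is a simplifying assumption.
import numpy as np

def predict_target_position(t_hist, p_hist, t_future, omega=1.0):
    """t_hist: (N,) timestamps; p_hist: (N, 3) estimated target positions
    (e.g., from marker-based pose estimates); returns predicted (3,) position."""
    t_hist = np.asarray(t_hist, dtype=float)
    p_hist = np.asarray(p_hist, dtype=float)

    # Planar motion: fit x(t), y(t) with a first-order (constant-velocity) model.
    A_xy = np.column_stack([t_hist, np.ones_like(t_hist)])
    coef_x, _, _, _ = np.linalg.lstsq(A_xy, p_hist[:, 0], rcond=None)
    coef_y, _, _, _ = np.linalg.lstsq(A_xy, p_hist[:, 1], rcond=None)

    # Vertical motion: fit z(t) = a*sin(omega*t) + b*cos(omega*t) + c,
    # a linear least-squares problem once omega is fixed.
    A_z = np.column_stack([np.sin(omega * t_hist),
                           np.cos(omega * t_hist),
                           np.ones_like(t_hist)])
    coef_z, _, _, _ = np.linalg.lstsq(A_z, p_hist[:, 2], rcond=None)

    x_f = coef_x[0] * t_future + coef_x[1]
    y_f = coef_y[0] * t_future + coef_y[1]
    z_f = (coef_z[0] * np.sin(omega * t_future)
           + coef_z[1] * np.cos(omega * t_future)
           + coef_z[2])
    return np.array([x_f, y_f, z_f])

# Example: a synthetic deck trajectory observed for 5 s, predicted 1 s ahead.
t = np.linspace(0.0, 5.0, 50)
deck = np.column_stack([0.5 * t, 0.2 * t, 0.3 * np.sin(1.0 * t)])
print(predict_target_position(t, deck, t_future=6.0, omega=1.0))
```

In a full pipeline, the position history would come from the marker-based relative pose estimates, and the predicted pose would feed the landing controller; the closed-form sinusoid fit above simply stands in for whatever model-based planner the paper actually uses.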