{"title":"Camera Based Localization for Autonomous UAV Formation Flight","authors":"Z. Mahboubi, Zico Kolter, Tao Wang, G. Bower","doi":"10.2514/6.2011-1658","DOIUrl":null,"url":null,"abstract":"This work considers the task of accurate in-air localization for multiple unmanned or autonomous aerial vehicles flying in close formation. The paper describes our experimental setup using two small UAVs and the details of the localization algorithm. The algorithm was implemented on two low-cost, electric powered, remote control aircraft with wing spans of approximately 2 meters. Our control software, running on an onboard x86 CPU, uses LQG control (an LQR controller coupled with an EKF state estimator) and a linearized state space model to control both aircraft to fly synchronized circles. In addition to its control system, the lead aircraft is outfitted with a known pattern of high-intensity LED lights. The trailing aircraft captures images of these LEDs with a camera and uses the Orthogonal Iteration computer vision algorithm to determine the relative position and orientation of the trailing aircraft with respect to the lead aircraft at 25Hz. The entire process is carried-out in real-time with both vehicles flying autonomously. We note that the camera based system is used for localization, but not yet for closed-loop control. Although, an absolute quantification of the error for the in-air localization system is difficult as we do not have ground truth positioning data during flight testing, our simulation results analysis and indoor measurements suggest that we can achieve localization accuracy on the order of 10 cm (5% wingspan) when the UAVs are separated by a distance of about 10 meters (5 spans).","PeriodicalId":269486,"journal":{"name":"Infotech@Aerospace 2011","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infotech@Aerospace 2011","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2514/6.2011-1658","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 40
Abstract
This work considers the task of accurate in-air localization for multiple unmanned or autonomous aerial vehicles flying in close formation. The paper describes our experimental setup using two small UAVs and the details of the localization algorithm. The algorithm was implemented on two low-cost, electric-powered, remote-control aircraft with wingspans of approximately 2 meters. Our control software, running on an onboard x86 CPU, uses LQG control (an LQR controller coupled with an EKF state estimator) and a linearized state-space model to fly both aircraft in synchronized circles. In addition to its control system, the lead aircraft is outfitted with a known pattern of high-intensity LED lights. The trailing aircraft captures images of these LEDs with a camera and uses the Orthogonal Iteration computer-vision algorithm to determine its position and orientation relative to the lead aircraft at 25 Hz. The entire process is carried out in real time with both vehicles flying autonomously. We note that the camera-based system is used for localization, but not yet for closed-loop control. Although an absolute quantification of the in-air localization error is difficult, since we do not have ground-truth positioning data during flight testing, our simulation analysis and indoor measurements suggest that we can achieve localization accuracy on the order of 10 cm (5% of wingspan) when the UAVs are separated by about 10 meters (5 wingspans).
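To make the control loop concrete, below is a minimal sketch in Python (NumPy/SciPy) of the LQG structure the abstract names: a discrete-time LQR gain computed from a linearized state-space model, acting on an EKF state estimate. The matrices, models, and noise covariances are placeholders, not the aircraft model or tuning actually used in the paper.

```python
# A minimal sketch of the LQG structure named in the abstract: a discrete
# LQR gain from a linearized state-space model (A, B), applied to an EKF
# state estimate. All matrices and models here are placeholder assumptions,
# not the authors' actual flight model or gains.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR gain K: u = -K x minimizes sum(x'Qx + u'Ru)
    subject to x_{k+1} = A x_k + B u_k."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Qn, Rn):
    """One EKF predict/update cycle for process model f(x, u) and
    measurement model h(x), with Jacobians F_jac and H_jac."""
    # Predict: propagate the state and its covariance
    F = F_jac(x, u)
    x = f(x, u)
    P = F @ P @ F.T + Qn
    # Update: correct with the measurement innovation
    H = H_jac(x)
    y = z - h(x)
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Closing the loop: u = -lqr_gain(A, B, Q, R) @ x_hat, with x_hat
# maintained by repeated calls to ekf_step.
```

Similarly, the sketch below illustrates the Orthogonal Iteration pose algorithm (Lu, Hager, and Mlakar, 2000) referenced in the abstract, which alternates a closed-form optimal translation with an SVD-based absolute-orientation step to minimize the object-space collinearity error. The function names and the assumption of pre-normalized image coordinates are ours; this is an illustration of the algorithm, not the authors' flight code.

```python
# A minimal sketch of the Orthogonal Iteration (OI) algorithm of Lu,
# Hager & Mlakar (2000): estimate the pose (R, t) minimizing the
# object-space error sum_i ||(I - V_i)(R p_i + t)||^2, where V_i is the
# line-of-sight projector of image point i. The LED pattern geometry and
# camera calibration implied here are illustrative assumptions.
import numpy as np

def orthogonal_iteration(model_pts, image_pts, R0=None,
                         tol=1e-9, max_iters=100):
    """model_pts: (n, 3) LED positions in the lead aircraft's body frame.
    image_pts: (n, 2) normalized image coordinates (pixels already
    mapped through the inverse camera intrinsic matrix)."""
    p = np.asarray(model_pts, dtype=float)
    n = len(p)
    v = np.hstack([np.asarray(image_pts, dtype=float), np.ones((n, 1))])
    # Line-of-sight projectors V_i = v_i v_i^T / (v_i^T v_i)
    V = np.einsum('ni,nj->nij', v, v) / (v * v).sum(axis=1)[:, None, None]
    Vbar = V.mean(axis=0)
    t_fact = np.linalg.inv(np.eye(3) - Vbar) / n

    def optimal_t(R):
        # Closed-form optimal translation for a fixed rotation:
        # t(R) = (1/n) (I - Vbar)^{-1} sum_i (V_i - I) R p_i
        return t_fact @ np.einsum('nij,nj->i', V - np.eye(3), p @ R.T)

    R = np.eye(3) if R0 is None else R0
    prev_err = np.inf
    for _ in range(max_iters):
        t = optimal_t(R)
        # Project the transformed model points onto their lines of sight
        q = np.einsum('nij,nj->ni', V, p @ R.T + t)
        # Absolute-orientation step (Horn/Umeyama): best R aligning p to q
        pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
        U, _, Wt = np.linalg.svd(qc.T @ pc)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Wt)])  # enforce det(R)=+1
        R = U @ D @ Wt
        t = optimal_t(R)
        resid = np.einsum('nij,nj->ni', np.eye(3) - V, p @ R.T + t)
        err = (resid * resid).sum()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

At a 25 Hz frame rate, the previous frame's rotation is a natural choice for R0, so during tracking the iteration typically converges in only a few steps.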