Autonomous Precision Landing for the Joint Tactical Aerial Resupply Vehicle

S. Recker, C. Gribble, M. Butkiewicz

2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), October 2018. DOI: 10.1109/AIPR.2018.8707418
We discuss the precision autonomous landing features of the Joint Tactical Aerial Resupply Vehicle (JTARV) platform. Autonomous navigation for aerial vehicles demands that computer vision algorithms not only provide relevant, actionable information, but do so in a timely manner; that is, the algorithms must operate in real time. This requirement for high performance dictates optimization at every level, which is the focus of our ongoing research and development efforts to add autonomous features to JTARV. Autonomous precision landing capabilities are enabled by high-performance deep learning and structure-from-motion techniques optimized for NVIDIA mobile GPUs. The system uses a single downward-facing camera to guide the vehicle to a coded photogrammetry target, ultimately enabling fully autonomous aerial resupply for troops on the ground. This paper details the system architecture and perception system design and evaluates performance on a scale vehicle. Results demonstrate that the system is capable of landing on stationary targets within relatively narrow spaces.
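The abstract describes guiding the vehicle to a coded photogrammetry target seen by a single downward-facing camera. The paper's actual detection and control pipeline is not given here, but the core geometric step of such a scheme can be illustrated with a minimal pinhole-camera sketch. All function names and parameter values below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: turn the detected landing-target pixel location into a metric
# lateral offset for the vehicle using a simple pinhole camera model. The paper
# does not specify its camera model or control interface; these helpers are
# hypothetical.

def lateral_offset(u, v, cx, cy, fx, fy, altitude_m):
    """Metric (x, y) offset of the target from the camera's optical axis.

    (u, v)     -- pixel coordinates of the detected target center
    (cx, cy)   -- principal point of the downward-facing camera, in pixels
    (fx, fy)   -- focal lengths, in pixels
    altitude_m -- height above the target plane (e.g. from the vehicle's
                  state estimate or structure-from-motion)
    """
    x = (u - cx) / fx * altitude_m
    y = (v - cy) / fy * altitude_m
    return x, y


def altitude_from_marker(marker_size_m, pixel_width, fx):
    """Rough altitude estimate from the apparent size of a coded target of
    known physical side length (pinhole similar-triangles relation)."""
    return fx * marker_size_m / pixel_width


# Example: a target detected 80 px right of the principal point at 10 m
# altitude with fx = 800 px corresponds to a 1 m lateral correction.
print(lateral_offset(720, 360, 640, 360, 800.0, 800.0, 10.0))  # (1.0, 0.0)
```

In a real system these offsets would feed the vehicle's position controller each frame, and the known side length of the coded target gives a complementary range cue as the marker grows in the image during descent.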