Analytical Framework for Sensing Requirements Definition in Non-Cooperative UAS Sense and Avoid
Giancarmine Fasano, Roberto Opromolla

This paper provides an analytical framework to address the definition of sensing requirements in non-cooperative UAS sense and avoid. The generality of the approach makes it useful for exploring sensor design and selection trade-offs, for defining tailored and adaptive sensing strategies, and for evaluating the potential of given sensing architectures, also concerning their interface to airspace rules and traffic characteristics. The framework comprises a set of analytical relations covering the following technical aspects: field of view and surveillance rate requirements in azimuth and elevation; the link between sensing accuracy and closest point of approach estimates, expressed through approximated derivatives valid in near-collision conditions; and the diverse (but interconnected) effects of sensing accuracy and detection range on the probabilities of missed and false conflict detections. A central idea is to focus on a specific target time to closest point of approach at obstacle declaration as the key driver for sensing system design and tuning, which allows the variability of conflict conditions within the aircraft field of regard to be accounted for. Numerical analyses complement the analytical developments, demonstrating their statistical consistency, showing quantitative examples of how sensing performance varies with conflict geometry, and highlighting potential implications of the derived concepts. The developed framework can potentially support holistic approaches and evaluations in different scenarios, including the very low-altitude urban airspace.
DOI: 10.3390/drones7100621 (Drones, published 2023-10-03)
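The target time to closest point of approach (CPA) that drives the framework can be illustrated with the standard constant-relative-velocity CPA equations. The sketch below is a minimal illustration of those textbook relations, not the paper's approximated derivatives; the example numbers are hypothetical.

```python
import numpy as np

def cpa_from_relative_state(rel_pos, rel_vel):
    """Time to closest point of approach (t_CPA) and miss distance (d_CPA)
    under a constant relative-velocity assumption.

    rel_pos: obstacle position relative to ownship (m), shape (3,)
    rel_vel: obstacle velocity relative to ownship (m/s), shape (3,)
    """
    rel_pos = np.asarray(rel_pos, dtype=float)
    rel_vel = np.asarray(rel_vel, dtype=float)
    v2 = rel_vel @ rel_vel
    if v2 < 1e-12:                      # no relative motion: range is constant
        return 0.0, float(np.linalg.norm(rel_pos))
    t_cpa = -(rel_pos @ rel_vel) / v2   # minimiser of |rel_pos + t * rel_vel|
    d_cpa = float(np.linalg.norm(rel_pos + t_cpa * rel_vel))
    return float(t_cpa), d_cpa

# Hypothetical example: intruder 800 m ahead with a 30 m lateral offset,
# closing at 40 m/s.
t, d = cpa_from_relative_state([800.0, 30.0, 0.0], [-40.0, 0.0, 0.0])
print(f"t_CPA = {t:.1f} s, d_CPA = {d:.1f} m")   # ~20 s, ~30 m
```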
Real-Time Object Detection Based on UAV Remote Sensing: A Systematic Literature Review
Zhen Cao, Lammert Kooistra, Wensheng Wang, Leifeng Guo, João Valente
Real-time object detection based on UAV remote sensing is widely required in different scenarios. In the past 20 years, with the development of unmanned aerial vehicles (UAVs), remote sensing technology, deep learning technology, and edge computing technology, research on UAV real-time object detection in different fields has become increasingly important. However, since real-time UAV object detection is a comprehensive task involving hardware, algorithms, and other components, the complete implementation of real-time object detection is often overlooked. Although there is a large amount of literature on real-time object detection based on UAV remote sensing, little attention has been given to its workflow. This paper aims to systematically review previous studies on UAV real-time object detection in terms of application scenarios, hardware selection, real-time detection paradigms, detection algorithms and their optimization technologies, and evaluation metrics. Through visual and narrative analyses, conclusions are drawn for all of the proposed research questions. Real-time object detection is most in demand in scenarios such as emergency rescue and precision agriculture. Multi-rotor UAVs and RGB images are of most interest in applications, and real-time detection mainly uses edge computing with documented processing strategies. GPU-based edge computing platforms are widely used, and deep learning algorithms are preferred for real-time detection. Meanwhile, optimization techniques such as lightweight convolutional layers are needed for deployment on resource-limited computing platforms. In addition to accuracy, speed, latency, and energy are equally important evaluation metrics. Finally, this paper thoroughly discusses the challenges of sensor-, edge computing-, and algorithm-related lightweight technologies in real-time object detection. It also discusses the prospective impact of future developments in autonomous UAVs and communications on UAV real-time object detection.
DOI: 10.3390/drones7100620 (Drones, published 2023-10-03)
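Since the review treats speed and latency as first-class evaluation metrics alongside accuracy, a small profiling harness illustrates how per-frame latency and FPS are typically measured. This is a generic sketch: `detect` is a placeholder for any detector callable, and the warm-up pass and percentile choice are our assumptions, not a method from the review.

```python
import time
import statistics

def profile_detector(detect, frames, warmup=5):
    """Measure per-frame latency (ms) and throughput (FPS) of a detector.

    detect: any callable mapping one frame to detections (placeholder here).
    frames: iterable of pre-loaded images, so I/O is excluded from timing.
    """
    frames = list(frames)
    for f in frames[:warmup]:           # warm-up pass (cache/JIT/GPU init)
        detect(f)
    latencies = []
    for f in frames:
        t0 = time.perf_counter()
        detect(f)
        latencies.append((time.perf_counter() - t0) * 1e3)
    mean_ms = statistics.mean(latencies)
    return {"mean_ms": mean_ms,
            "p95_ms": sorted(latencies)[int(0.95 * len(latencies)) - 1],
            "fps": 1e3 / mean_ms}
```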
Quantifying Within-Flight Variation in Land Surface Temperature from a UAV-Based Thermal Infrared Camera
Jamal Elfarkh, Kasper Johansen, Victor Angulo, Omar Lopez Camargo, Matthew F. McCabe
Land Surface Temperature (LST) is a key variable used across various applications, including irrigation monitoring, vegetation health assessment and urban heat island studies. While satellites offer moderate-resolution LST data, unmanned aerial vehicles (UAVs) provide high-resolution thermal infrared measurements. However, the continuous and rapid variation in LST makes the production of orthomosaics from UAV-based image collections challenging. Understanding the environmental and meteorological factors that amplify this variation is necessary to select the most suitable conditions for collecting UAV-based thermal data. Here, we capture variations in LST while hovering for 15–20 min over diverse surfaces, covering sand, water, grass, and an olive tree orchard. The impact of different flying heights and times of the day was examined, with all collected thermal data evaluated against calibrated field-based Apogee SI-111 sensors. The evaluation showed a significant error in UAV-based data associated with wind speed, which increased the bias from −1.02 to 3.86 °C for 0.8 to 8.5 m/s winds, respectively. Different surfaces, albeit under varying ambient conditions, showed temperature variations ranging from 1.4 to 6 °C during the flights. The temperature variations observed while hovering were linked to solar radiation, specifically radiation fluctuations occurring after sunrise and before sunset. Irrigation and atmospheric conditions (i.e., thin clouds) also contributed to observed temperature variations. This research offers valuable insights into LST variations during standard 15–20 min UAV flights under diverse environmental conditions. Understanding these factors is essential for developing correction procedures and considering data inconsistencies when processing and interpreting UAV-based thermal infrared data and derived orthomosaics.
DOI: 10.3390/drones7100617 (Drones, published 2023-10-02)
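The bias figures quoted above come from comparing UAV retrievals against the Apogee SI-111 references. A minimal sketch of that comparison, with hypothetical made-up sample values, shows the bias and RMSE computation.

```python
import numpy as np

def lst_error_stats(uav_lst, ref_lst):
    """Bias and RMSE of UAV-retrieved LST against a ground reference
    (e.g., time-matched Apogee SI-111 readings); paired arrays in degC."""
    diff = np.asarray(uav_lst, float) - np.asarray(ref_lst, float)
    return {"bias": float(diff.mean()),
            "rmse": float(np.sqrt((diff ** 2).mean()))}

# Hypothetical hover samples vs. reference readings (values invented).
print(lst_error_stats(uav_lst=[31.2, 31.8, 32.5], ref_lst=[30.9, 31.0, 31.1]))
```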
SODCNN: A Convolutional Neural Network Model for Small Object Detection in Drone-Captured Images
Lu Meng, Lijun Zhou, Yangqian Liu

Drone images contain large numbers of small, dense targets that are vital for agriculture, security, monitoring, and more. However, detecting small objects remains an unsolved challenge, as they occupy a small proportion of the image and have less distinct features. Conventional object detection algorithms fail to produce satisfactory results for small objects. To address this issue, an improved small object detection algorithm is proposed by modifying the YOLOv7 network structure. Firstly, the redundant detection head for large objects is removed, and feature extraction for small object detection is brought forward. Secondly, the number of anchor boxes is increased to improve the recall rate for small objects. In addition, considering the limitations of the CIoU loss function in optimization, the EIoU loss function is employed as the bounding box loss function to achieve more stable and effective regression. Lastly, an attention-based feature fusion module is introduced to replace the Concat module in the FPN. This module considers both global and local information, effectively addressing the challenges of multiscale and small object fusion. Experimental results on the VisDrone2019 dataset demonstrate that the proposed algorithm achieves an mAP50 of 54.03% and an mAP50:90 of 32.06%, outperforming recent comparable methods and significantly enhancing the model's capability for small object detection in dense scenes.
DOI: 10.3390/drones7100615 (Drones, published 2023-10-01)
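The EIoU loss mentioned above augments the IoU term with separate penalties for center distance, width difference, and height difference, each normalized by the smallest enclosing box. Below is a sketch following the published EIoU formulation (independent of this paper's implementation), assuming boxes in (x1, y1, x2, y2) form.

```python
def eiou_loss(box_p, box_g, eps=1e-9):
    """EIoU loss for axis-aligned boxes (x1, y1, x2, y2).

    EIoU = 1 - IoU + centre-distance term + width term + height term,
    all normalised by the smallest enclosing box."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # Intersection over union.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + eps)
    # Smallest enclosing box.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    # Squared centre distance, and width/height differences.
    d2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    dw2 = ((px2 - px1) - (gx2 - gx1)) ** 2
    dh2 = ((py2 - py1) - (gy2 - gy1)) ** 2
    return (1.0 - iou
            + d2 / (cw ** 2 + ch ** 2 + eps)
            + dw2 / (cw ** 2 + eps)
            + dh2 / (ch ** 2 + eps))
```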
Efficient YOLOv7-Drone: An Enhanced Object Detection Approach for Drone Aerial Imagery
Xiaofeng Fu, Guoting Wei, Xia Yuan, Yongshun Liang, Yuming Bo
In recent years, the rise of low-cost mini rotary-wing drone technology across diverse sectors has emphasized the crucial role of object detection within drone aerial imagery. Such drones come with intrinsic limitations, especially in computational power and resource availability. This context underscores an urgent need for solutions that combine low latency, high precision, and computational efficiency. Previous methodologies have primarily depended on high-resolution images, leading to considerable computational burdens. To enhance the efficiency and accuracy of object detection in drone aerial images, and building on YOLOv7, we propose Efficient YOLOv7-Drone. Recognizing the common presence of small objects in aerial imagery, we eliminated the less efficient P5 detection head and incorporated a P2 detection head for increased precision in small object detection. To ensure efficient feature relay from the Backbone to the Neck, channels within the CBS module were optimized. To focus the model more on the foreground and reduce redundant computation, the TGM-CESC module was introduced, generating pixel-level constrained sparse convolution masks. Furthermore, to mitigate potential information loss from sparse convolution, we embedded the head context-enhanced method (HCEM). Comprehensive evaluation on the VisDrone and UAVDT datasets demonstrated the model's efficacy and practical applicability: Efficient YOLOv7-Drone achieved state-of-the-art scores while ensuring real-time detection performance.
DOI: 10.3390/drones7100616 (Drones, published 2023-10-01)
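The pixel-level sparse convolution masks attributed to TGM-CESC can be pictured as gating feature maps so that computation concentrates on likely-foreground pixels. The sketch below is only a conceptual stand-in for that idea, with a hypothetical `score_map` input and a top-k keep rule that is our assumption, not the paper's module.

```python
import numpy as np

def masked_feature_gate(features, score_map, keep_ratio=0.3):
    """Conceptual foreground gating: keep only the top-scoring pixels of a
    feature map, zeroing the rest so downstream sparse kernels can skip them.

    features:  (C, H, W) feature map
    score_map: (H, W) foreground scores (e.g., from a small conv head)
    """
    flat = score_map.ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]       # score of the k-th best pixel
    mask = (score_map >= thresh).astype(features.dtype)  # binary (H, W) mask
    return features * mask[None, :, :], mask
```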
Relative Localization within a Quadcopter Unmanned Aerial Vehicle Swarm Based on Airborne Monocular Vision
Xiaokun Si, Guozhen Xu, Mingxing Ke, Haiyan Zhang, Kaixiang Tong, Feng Qi

Swarming is one of the important trends in the development of small multi-rotor UAVs. The stable operation of UAV swarms and air-to-ground cooperative operations depends on precise relative position information within the swarm. Existing relative localization solutions mainly rely on passively received external information or on expensive and complex sensors, which are not suitable for small rotary-wing UAV swarms. Therefore, we develop a relative localization solution based on airborne monocular sensing data to directly realize real-time relative localization among UAVs. First, we apply the lightweight YOLOv8-pose target detection algorithm to detect quadcopter UAVs and their rotor motors in real time. Then, to improve computational efficiency, we exploit the geometric properties of UAVs to derive a more adaptable algorithm for solving the P3P problem. To resolve the multi-solution ambiguity that arises when fewer than four motors are detected, we propose an analytical scheme for selecting the correct solution based on plausible attitude information. We also incorporate the maximum motor-detection confidence as a weight in the relative position calculation to further improve accuracy. Finally, we conducted simulations and practical experiments on an experimental UAV. The experimental results verify the feasibility of the proposed scheme, with the core algorithm performing significantly better than the classical algorithm. Our research provides viable solutions for freeing UAV swarms from dependence on external information, applying them in complex environments, improving autonomous collaboration, and reducing costs.
DOI: 10.3390/drones7100612 (Drones, published 2023-09-29)
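The core geometric step, recovering relative position from the detected motor keypoints of a quadcopter with known geometry, can be sketched with OpenCV's generic P3P solver. This stands in for, and is not, the paper's adapted P3P algorithm; the motor spacing and point ordering are hypothetical assumptions.

```python
import numpy as np
import cv2

# Known quadcopter geometry: four motor centres in the target's body frame (m).
# A 0.25 m half-spacing is a hypothetical airframe, not a value from the paper.
ARM = 0.25
MOTORS_BODY = np.array([[ ARM,  ARM, 0.0],
                        [ ARM, -ARM, 0.0],
                        [-ARM, -ARM, 0.0],
                        [-ARM,  ARM, 0.0]], dtype=np.float64)

def relative_position(motor_pixels, K):
    """Relative position of a detected quadcopter from the pixel centres of
    its four motors (e.g., YOLOv8-pose keypoints), via a PnP solution.

    motor_pixels: (4, 2) pixel coordinates ordered to match MOTORS_BODY.
    K:            (3, 3) camera intrinsic matrix.
    Returns the translation vector (m) in the camera frame, or None."""
    ok, rvec, tvec = cv2.solvePnP(MOTORS_BODY,
                                  np.asarray(motor_pixels, np.float64),
                                  K, distCoeffs=None,
                                  flags=cv2.SOLVEPNP_P3P)
    return tvec.ravel() if ok else None
```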
Large-Sized Multirotor Design: Accurate Modeling with Aerodynamics and Optimization for Rotor Tilt Angle
Anhuan Xie, Xufei Yan, Weisheng Liang, Shiqiang Zhu, Zheng Chen

Progress in advanced air mobility (AAM) is driven by needs in transportation, logistics, rescue, and disaster relief. Consequently, large-sized multirotor unmanned aerial vehicles (UAVs), with their strong power and ample space, show great potential. To optimize the design process for large-sized multirotors and reduce physical trial and error, a detailed dynamic model with an accurate aerodynamic component is first established. In addition, the center of gravity (CoG) offset and actuator dynamics, which are usually ignored in small-sized multirotors, are carefully considered. To improve the endurance and maneuverability of large-sized multirotors, the key concerns in real applications, a two-loop optimization method for rotor tilt angle design is proposed based on the established mathematical model. Its inner loop solves for the dynamic equilibrium points, relaxing the complex dynamic constraints introduced by aerodynamics in the overall optimization problem and improving solution efficiency. The ideal design results can be obtained through this offline process, which greatly reduces the need for physical trial and error. Finally, various experiments are carried out to demonstrate the accuracy of the established model and the effectiveness of the optimization method.
DOI: 10.3390/drones7100614 (Drones, published 2023-09-29)
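The two-loop structure, an inner equilibrium solve nested inside an outer tilt-angle search, can be sketched with a toy force balance. Everything below (mass, rotor count, the power/agility cost weights) is a hypothetical stand-in for the paper's aerodynamic model, intended only to show the nesting.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def hover_thrust(tilt, mass=25.0, g=9.81, n_rotors=8):
    """Inner loop: per-rotor thrust balancing weight at hover for a given
    rotor tilt angle (rad). Toy force balance standing in for the paper's
    aerodynamic equilibrium solve; mass and rotor count are hypothetical."""
    return brentq(lambda T: n_rotors * T * np.cos(tilt) - mass * g, 1.0, 1e4)

def cost(tilt, k_power=0.02, k_agility=1.0):
    """Outer-loop objective: a made-up trade-off in which hover power favours
    zero tilt while yaw authority/maneuverability favours some tilt."""
    T = hover_thrust(tilt)
    return k_power * T ** 1.5 - k_agility * np.sin(tilt)

# Outer loop: bounded 1-D search over tilt; each evaluation triggers the
# inner equilibrium solve, mirroring the nested two-loop structure.
best = minimize_scalar(cost, bounds=(0.0, np.radians(15.0)), method="bounded")
print(f"optimal tilt ~ {np.degrees(best.x):.2f} deg")
```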
A Unmanned Aerial Vehicle (UAV)/Unmanned Ground Vehicle (UGV) Dynamic Autonomous Docking Scheme in GPS-Denied Environments
Cheng Cheng, Xiuxian Li, Lihua Xie, Li Li

This study designs a navigation and landing scheme for an unmanned aerial vehicle (UAV) to autonomously land on an arbitrarily moving unmanned ground vehicle (UGV) in GPS-denied environments based on vision, ultra-wideband (UWB), and system information. In the approaching phase, an effective multi-innovation forgetting gradient (MIFG) algorithm is proposed to estimate the position of the UAV relative to the target using historical data (estimated distance and relative displacement measurements). Using these estimates, a saturated proportional navigation controller is developed, by which the UAV can approach the target, making the UGV enter the field of view (FOV) of the camera deployed on the UAV. Then, a sensor fusion estimation algorithm based on an extended Kalman filter (EKF) is proposed to achieve accurate landing. Finally, a numerical example and a real experiment are used to support the theoretical results.
DOI: 10.3390/drones7100613 (Drones, published 2023-09-29)
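The EKF-based fusion stage can be illustrated with a single measurement update for a UWB range reading, a simplified stand-in for the paper's full vision/UWB/system-information fusion; the state layout and noise variance below are assumptions.

```python
import numpy as np

def ekf_range_update(x, P, z, R=0.05 ** 2):
    """EKF measurement update with one UWB range reading.

    x: state [px, py, vx, vy], target position/velocity relative to the UAV
    P: 4x4 state covariance
    z: measured range (m); R: range noise variance (assumed value)."""
    p = x[:2]
    r_pred = np.linalg.norm(p) + 1e-9            # predicted range, h(x) = |p|
    H = np.array([[p[0] / r_pred, p[1] / r_pred, 0.0, 0.0]])  # Jacobian of h
    S = H @ P @ H.T + R                          # innovation covariance (1x1)
    K = P @ H.T / S                              # Kalman gain (4x1)
    x = x + (K * (z - r_pred)).ravel()           # state correction
    P = (np.eye(4) - K @ H) @ P                  # covariance update
    return x, P
```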
Robust Collision-Free Guidance and Control for Underactuated Multirotor Aerial Vehicles
Jorge A. Ricardo Jr, Davi A. Santos

This paper is concerned with the robust collision-free guidance and control of underactuated multirotor aerial vehicles in the presence of moving obstacles capable of accelerating, under linear velocity and rotor thrust constraints, and subject to matched model uncertainties and disturbances. We address this problem using a hierarchical flight control architecture composed of a supervisory outer-loop guidance module and an inner-loop stabilizing control module. The inner loop is designed using a typical hierarchical control scheme that nests the attitude control loop inside the position loop. The effectiveness of this scheme relies on proper time-scale separation (TSS) between the closed-loop (faster) rotational and (slower) translational dynamics, which is not straightforward to enforce in practice. However, by combining an integral sliding mode attitude control law, which guarantees instantaneous tracking of the attitude commands, with a smooth and robust position control law, we enforce the TSS by construction, thus avoiding loss of robustness and tedious trial-and-error gain tuning. The outer-loop guidance is built upon the continuous-control-obstacles method, which is extended to respect the velocity and actuator constraints and to avoid multiple moving obstacles that can accelerate. The overall method is evaluated using numerical Monte Carlo simulation and is shown to be effective in providing satisfactory tracking performance, collision-free guidance, and satisfaction of the linear velocity and actuator constraints.
DOI: 10.3390/drones7100611 (Drones, published 2023-09-27)
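An integral sliding mode attitude law of the kind the abstract describes can be sketched per axis: a sliding surface with an integral term plus a saturated switching action that bounds chattering. The gains and boundary-layer width below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def ism_attitude_control(e, e_dot, e_int, lam=4.0, ki=2.0, K=8.0, phi=0.05):
    """Per-axis integral sliding mode attitude law (conceptual sketch).

    e, e_dot, e_int: attitude error, its rate, and its integral.
    Returns a torque command; the boundary layer phi limits chattering."""
    s = e_dot + lam * e + ki * e_int           # integral sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)          # smooth sign() inside the layer
    return -K * sat - lam * e_dot              # switching + damping terms

# Usage inside a control loop: e_int += e * dt, then
# torque = ism_attitude_control(e, e_dot, e_int)
```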
DB-Tracker: Multi-Object Tracking for Drone Aerial Video Based on Box-MeMBer and MB-OSNet
Yubin Yuan, Yiquan Wu, Langyue Zhao, Jinlin Chen, Qichang Zhao

Drone aerial videos offer a promising future in modern digital media and remote sensing applications, but effectively tracking several objects in these recordings is difficult. Drone aerial footage typically includes complicated scenery with moving objects such as people, vehicles, and animals, and complicated scenarios such as large-scale viewing angle shifts and object crossings may occur simultaneously. Our approach mixes random finite sets into a detection-based tracking framework, taking each object's location and appearance into account. It maintains the detection box information of the detected objects and constructs the Box-MeMBer object position prediction framework based on MeMBer random finite set point object tracking. We develop a hierarchical connection structure in the OSNet network, building MB-OSNet to obtain object appearance information; connecting feature maps of different levels through the hierarchy allows the network to obtain rich semantic information at different scales. Similarity measurements for all detections and trajectories are collected in a cost matrix that estimates the likelihood of all possible matches, with entries comparing tracks and detections in terms of position and appearance. The DB-Tracker algorithm performs excellently in multi-object tracking of drone aerial videos, achieving MOTA of 37.4% and 46.2% on the VisDrone and UAVDT data sets, respectively. DB-Tracker achieves high robustness by comprehensively considering object position and appearance information, especially in handling complex scenes and target occlusion. This makes DB-Tracker a powerful tool in challenging applications such as drone aerial video analysis.
DOI: 10.3390/drones7100607 (Drones, published 2023-09-27)
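The cost-matrix association step the abstract describes, blending a position term and an appearance term and then solving the assignment, can be sketched with the standard Hungarian solver. The 0.6 appearance weight and the box/embedding formats are assumptions, not DB-Tracker's exact scheme.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_boxes, det_boxes, track_feats, det_feats, w_app=0.6):
    """Blend a position cost (centre distance) and an appearance cost
    (cosine distance between embeddings) into one matrix, then solve the
    track/detection assignment.

    *_boxes: (N, 4) as [cx, cy, w, h]; *_feats: L2-normalised embeddings."""
    tc, dc = track_boxes[:, :2], det_boxes[:, :2]
    pos = np.linalg.norm(tc[:, None, :] - dc[None, :, :], axis=-1)
    pos = pos / (pos.max() + 1e-9)                # scale distances to [0, 1]
    app = 1.0 - track_feats @ det_feats.T         # cosine distance
    cost = w_app * app + (1.0 - w_app) * pos
    rows, cols = linear_sum_assignment(cost)      # Hungarian assignment
    return list(zip(rows.tolist(), cols.tolist())), cost
```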