
Latest Publications in Drones

Integration of Unmanned Aerial Vehicle Imagery and Machine Learning Technology to Map the Distribution of Conifer and Broadleaf Canopy Cover in Uneven-Aged Mixed Forests
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-13 · DOI: 10.3390/drones7120705
Nyo Me Htun, T. Owari, Satoshi Tsuyuki, Takuya Hiroshima
Uneven-aged mixed forests have been recognized as important contributors to biodiversity conservation, ecological stability, carbon sequestration, the provisioning of ecosystem services, and sustainable timber production. Recently, numerous studies have demonstrated the applicability of integrating remote sensing datasets with machine learning for forest management purposes, such as forest type classification and the identification of individual trees. However, studies focusing on the integration of unmanned aerial vehicle (UAV) datasets with machine learning for mapping tree species groups in uneven-aged mixed forests remain limited. Thus, this study explored the feasibility of integrating UAV imagery with semantic segmentation-based machine learning classification algorithms to describe conifer and broadleaf species canopies in uneven-aged mixed forests. The study was conducted in two sub-compartments of the University of Tokyo Hokkaido Forest in northern Japan. We analyzed UAV images using the semantic segmentation-based U-Net model and a random forest (RF) classification model. The results indicate that the integration of UAV imagery with the U-Net model generated reliable conifer and broadleaf canopy cover classification maps in both sub-compartments, whereas the RF model often failed to distinguish conifer crowns. Moreover, our findings demonstrate the potential of this method to detect dominant tree species groups in uneven-aged mixed forests.
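The pixel-wise RF baseline this abstract compares against can be illustrated with a minimal synthetic sketch: a random forest trained on per-pixel spectral values. The band means, class separation, and classifier settings below are assumptions for illustration, not the paper's configuration or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-pixel RGB reflectance: conifer crowns assumed darker,
# broadleaf crowns brighter (illustrative values only).
n = 2000
conifer = rng.normal(loc=[60, 90, 55], scale=10, size=(n, 3))
broadleaf = rng.normal(loc=[90, 140, 70], scale=10, size=(n, 3))
X = np.vstack([conifer, broadleaf])
y = np.array([0] * n + [1] * n)  # 0 = conifer, 1 = broadleaf

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
print(f"pixel-wise accuracy: {acc:.3f}")
```

Unlike U-Net, this per-pixel approach sees no spatial context, which is one reason crown boundaries and conifer crowns are harder for RF to delineate.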
Citations: 0
Optimal Model-Free Finite-Time Control Based on Terminal Sliding Mode for a Coaxial Rotor
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-13 · DOI: 10.3390/drones7120706
Hossam-Eddine Glida, C. Sentouh, J. Rath
This study focuses on the tracking control problem for a coaxial unmanned aerial vehicle (UAV) without any prior knowledge of its dynamic model. To overcome the limitations of model-based control, a model-free approach based on terminal sliding mode control is proposed to achieve precise position and rotation tracking. The terminal sliding mode technique is utilized to approximate the unknown nonlinear model of the system, while global stability with finite-time convergence of the overall system is guaranteed using Lyapunov theory. Additionally, the selection of control parameters is addressed by incorporating the accelerated particle swarm optimization (APSO) algorithm. Finally, numerical simulation tests demonstrate the effectiveness and feasibility of the proposed design approach, showing that the model-free controller achieves accurate tracking even without prior knowledge of the system's dynamic model.
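The defining property of terminal sliding mode control is finite-time (rather than merely asymptotic) convergence, obtained from a fractional-power feedback term. A minimal sketch on a first-order integrator — not the paper's coaxial-rotor model; the plant, gains, and exponent are assumptions:

```python
import numpy as np

def fractional_feedback(x, k=2.0, alpha=0.5):
    """Terminal-style control u = -k * |x|^alpha * sign(x), alpha in (0, 1).

    For xdot = u, the fractional power yields finite-time convergence:
    x reaches 0 in |x0|^(1-alpha) / (k * (1 - alpha)) seconds.
    """
    return -k * np.abs(x) ** alpha * np.sign(x)

# Euler simulation of xdot = u from x0 = 1; predicted settling time is
# 1^(0.5) / (2 * 0.5) = 1.0 s, well inside the 2 s horizon.
dt, T = 1e-3, 2.0
x = 1.0
for _ in range(int(T / dt)):
    x += fractional_feedback(x) * dt

print(f"|x(T)| = {abs(x):.2e}")
```

With a plain linear feedback (alpha = 1) the same system only converges exponentially and never reaches zero in finite time; the fractional exponent is what the "finite-time" claim in the abstract rests on.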
Citations: 0
SiamMAN: Siamese Multi-Phase Aware Network for Real-Time Unmanned Aerial Vehicle Tracking
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-13 · DOI: 10.3390/drones7120707
Faxue Liu, Xuan Wang, Qiqi Chen, Jinghong Liu, Chenglong Liu
In this paper, we address aerial tracking tasks by designing multi-phase aware networks to obtain rich long-range dependencies. For aerial tracking, existing methods are prone to tracking drift in scenarios that demand multi-layer long-range feature dependencies, such as viewpoint changes caused by the UAV shooting perspective and low resolution. In contrast to previous works that used only multi-scale feature fusion to obtain contextual information, we designed a new architecture that adapts to the characteristics of different feature levels in challenging scenarios and adaptively integrates regional features with the corresponding global dependency information. Specifically, for the proposed tracker (SiamMAN), we first propose a two-stage aware neck (TAN), in which a cascaded splitting encoder (CSE) first obtains the distributed long-range relevance among the sub-branches by splitting feature channels, and a multi-level contextual decoder (MCD) then achieves further global dependency fusion. Finally, we design the response map context encoder (RCE), which utilizes long-range contextual information in backpropagation to accomplish pixel-level updating of the deeper features and better balance the semantic and spatial information. Several experiments on well-known tracking benchmarks illustrate that the proposed method outperforms SOTA trackers, a result of the effective utilization of the proposed multi-phase aware network across different levels of features.
Citations: 0
Imitation Learning of Complex Behaviors for Multiple Drones with Limited Vision
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-13 · DOI: 10.3390/drones7120704
Yu Wan, Jun Tang, Zipeng Zhao
Navigating multiple drones autonomously in complex and unpredictable environments, such as forests, poses a significant challenge typically addressed by wireless communication for coordination. However, this approach falls short in situations with limited central control or blocked communications. Addressing this gap, our paper explores the learning of complex behaviors by multiple drones with limited vision. Drones in a swarm rely on onboard sensors, primarily forward-facing stereo cameras, for environmental perception and neighbor detection. They learn complex maneuvers through the imitation of a privileged expert system, which involves finding the optimal set of neural network parameters to enable the most effective mapping from sensory perception to control commands. The training process adopts the Dagger algorithm, employing the framework of centralized training with decentralized execution. Using this technique, drones rapidly learn complex behaviors, such as avoiding obstacles, coordinating movements, and navigating to specified targets, all in the absence of wireless communication. This paper details the construction of a distributed multi-UAV cooperative motion model under limited vision, emphasizing the autonomy of each drone in achieving coordinated flight and obstacle avoidance. Our methodological approach and experimental results validate the effectiveness of the proposed vision-based end-to-end controller, paving the way for more sophisticated applications of multi-UAV systems in intricate, real-world scenarios.
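The DAgger scheme the abstract adopts alternates between rolling out the current learner and asking the privileged expert to relabel the states the learner actually visits, then refitting on the aggregated dataset. A minimal one-dimensional sketch — the integrator environment, linear expert, and least-squares learner are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def expert(s):
    return -1.0 * s  # privileged expert: drive the state to zero

def rollout(policy, steps=20):
    """Roll the current policy out and record the states it visits."""
    s, visited = rng.normal(), []
    for _ in range(steps):
        visited.append(s)
        s = s + 0.1 * policy(s)  # simple integrator dynamics
    return visited

# DAgger: aggregate (state, expert action) pairs from the LEARNER's rollouts,
# so the dataset covers the states the learner's own mistakes lead to.
states, actions = [], []
w = 0.0  # learner: linear policy a = w * s
for _ in range(5):
    policy = lambda s, w=w: w * s
    for s in rollout(policy):
        states.append(s)
        actions.append(expert(s))       # expert relabels visited states
    S = np.array(states)
    w = float(S @ np.array(actions) / (S @ S))  # least-squares refit

print(f"learned gain w = {w:.3f}")  # approaches the expert's -1.0
```

In this noise-free linear case a single aggregation round already recovers the expert; the value of the loop shows up when the learner is imperfect and would otherwise drift into states absent from expert-only demonstrations.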
Citations: 0
Air-to-Ground Path Loss Model at 3.6 GHz under Agricultural Scenarios Based on Measurements and Artificial Neural Networks
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-11 · DOI: 10.3390/drones7120701
Hanpeng Li, Kai Mao, Xuchao Ye, Taotao Zhang, Qiuming Zhu, Manxi Wang, Yurao Ge, Hangang Li, Farman Ali
Unmanned aerial vehicles (UAVs) have found expanding utilization in smart agriculture. Path loss (PL) is of significant importance in the link budget of UAV-aided air-to-ground (A2G) communications. This paper proposes a machine-learning-based PL model for A2G communication in agricultural scenarios. On this basis, a double-weight neurons-based artificial neural network (DWN-ANN) is proposed, which can strike a fine equilibrium between the amount of measurement data and the accuracy of predictions by using ray tracing (RT) simulation data for pre-training and measurement data for optimization training. Moreover, an RT pre-correction module is introduced into the DWN-ANN to optimize the impact of varying farmland materials on the accuracy of RT simulation, thereby improving the accuracy of RT simulation data. Finally, channel measurement campaigns are carried out over a farmland area at 3.6 GHz, and the measurement data are used for the training and validation of the proposed DWN-ANN. The prediction results of the proposed PL model demonstrate a fine concordance with the measurement data and are better than the traditional empirical models.
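The traditional empirical baseline such learned PL models are judged against is the log-distance form PL(d) = PL(d0) + 10·n·log10(d/d0) plus log-normal shadowing. A sketch fitting the path loss exponent n from synthetic measurements — the reference loss, distance range, and shadowing level are assumptions, not the 3.6 GHz campaign's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0) + shadowing
d0, pl_d0, n_true = 1.0, 43.6, 2.3         # assumed reference values
d = rng.uniform(10, 500, size=200)          # link distances in metres
shadowing = rng.normal(0, 2.0, size=200)    # log-normal shadowing, sigma = 2 dB
pl = pl_d0 + 10 * n_true * np.log10(d / d0) + shadowing

# Least-squares fit of intercept PL(d0) and path loss exponent n.
A = np.column_stack([np.ones_like(d), 10 * np.log10(d / d0)])
(pl0_hat, n_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)
print(f"fitted exponent n = {n_hat:.2f} (true {n_true})")
```

The ANN approach in the paper replaces this two-parameter straight line in log-distance with a learned, scenario-dependent mapping, which is where the reported accuracy gain over empirical models comes from.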
Citations: 0
Analysis of the Impact of Structural Parameter Changes on the Overall Aerodynamic Characteristics of Ducted UAVs
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-11 · DOI: 10.3390/drones7120702
Huarui Xv, Lei Zhao, Mingjian Wu, Kun Liu, Hongyue Zhang, Zhilin Wu
Ducted UAVs have attracted much attention because the duct structure can reduce the propeller tip vortices and thus increase the effective lift area of the lower propeller. This paper investigates the effects of parameters such as the coaxial twin-propeller configuration and the duct structure on the aerodynamic characteristics of ducted UAVs. The aerodynamic characteristics of the UAV were analyzed using CFD methods, and the sensitivity of the simulation results to each parameter was ranked using the orthogonal test method. The results indicate that, while maintaining overall strength, increasing the propeller spacing by about 0.055 times the duct chord length can increase the lift of the upper propeller by approximately 1.3%. Reducing the distance between the propeller and the top surface of the duct by about 0.5 times the duct chord length can increase the lift of the lower propeller by approximately 7.7%. Increasing the chord length of the duct cross-section by about 35.3% can simultaneously increase the lift of the duct structure and the total lift of the drone by approximately 150.6% and 15.7%, respectively. This research provides valuable guidance and reference for the subsequent overall design of ducted UAVs.
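The orthogonal-test sensitivity ranking mentioned above works by running the simulation only at the combinations of an orthogonal array and ranking factors by main-effect range. A sketch with an L4(2^3) array — the three factor names and the lift responses are made-up illustrations, not the paper's CFD data:

```python
import numpy as np

# L4(2^3) orthogonal array: 3 two-level factors covered in only 4 runs,
# each level of each factor appearing equally often.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
# Hypothetical lift responses for the 4 runs (illustrative only).
y = np.array([10.2, 11.8, 13.1, 14.5])

# Main effect of each factor: mean response at level 1 minus at level 0.
effects = {}
for j, name in enumerate(["propeller spacing", "tip clearance", "chord length"]):
    effects[name] = y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()

# Rank factors by the magnitude of their main effect.
ranking = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
print(effects, ranking)
```

The appeal of the method is the run budget: a full factorial over the same three factors would need 8 CFD runs instead of 4, and the gap widens quickly with more factors and levels.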
Citations: 0
An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-09 · DOI: 10.3390/drones7120699
Zhiyao Xiao, Guobao Zhang
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. A convolutional neural network (CNN) and a recurrent neural network (RNN) are employed as encoders for the different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework.
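The core idea of attention-based fusion is to weight each modality's encoded feature vector by a learned, input-dependent score before combining them, instead of concatenating or averaging with fixed weights. A minimal numpy sketch — the feature dimension, modality names, and dot-product scoring are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

# Encoded feature vectors from three sensor modalities.
feats = {m: rng.normal(size=64) for m in ["visual", "lidar", "inertial"]}

# A (here random) learned scoring vector maps each feature to a scalar score.
w_score = rng.normal(size=64)
scores = np.array([f @ w_score for f in feats.values()])
attn = softmax(scores)  # one weight per modality, summing to 1

# Fused feature: attention-weighted sum of the modality features.
fused = sum(a * f for a, f in zip(attn, feats.values()))
print(attn, fused.shape)
```

Because the weights depend on the current features, the fusion can, for example, lean on lidar in texture-poor scenes and on vision when point clouds degrade, which is what "adapt to various scenes" refers to.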
Citations: 0
Fixed-Time Extended Observer-Based Adaptive Sliding Mode Control for a Quadrotor UAV under Severe Turbulent Wind
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-09 · DOI: 10.3390/drones7120700
Armando Miranda-Moya, H. Castañeda, Hesheng Wang
This paper presents a fixed-time extended state observer-based adaptive sliding mode controller, evaluated on a quadrotor unmanned aerial vehicle subject to severe turbulent wind while executing a desired trajectory. Since both the state and the model of the system are assumed to be only partially known, the observer, whose convergence is independent of the initial states of the system, estimates the full state, model uncertainties, and the effects of turbulent wind in fixed time. This information is then compensated via feedback control conducted by a class of adaptive sliding mode controller, which is robust to perturbations and reduces the chattering effect by not overestimating its adaptive gain. Furthermore, the stability of the closed-loop system is analyzed by means of Lyapunov theory. Finally, simulation results validate the feasibility and advantages of the proposed strategy, where the observer enhances performance. For further demonstration, a comparison with an existing approach is provided.
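The non-overestimating adaptive-gain idea can be sketched on a scalar system: the gain grows only while the sliding variable is outside a small boundary layer, so it settles just above the disturbance bound instead of growing indefinitely. Everything here (plant, disturbance, rates) is an illustrative assumption, not the paper's quadrotor controller:

```python
import numpy as np

dt, T = 1e-3, 10.0
eps, gamma = 0.02, 5.0      # boundary-layer width, adaptation rate
x, k = 1.0, 0.0             # state (used as the sliding variable) and gain
for i in range(int(T / dt)):
    t = i * dt
    d = 0.5 * np.sin(2 * t)                 # bounded matched disturbance
    u = -k * np.clip(x / eps, -1.0, 1.0)    # saturated sliding-mode control
    if abs(x) > eps:
        k += gamma * abs(x) * dt            # adapt only outside the layer
    x += (u + d) * dt                       # plant: xdot = u + d

print(f"|x(T)| = {abs(x):.3f}, final gain k = {k:.2f}")
```

Once |x| enters the boundary layer the adaptation freezes, so k stays finite and only slightly above what is needed to dominate the disturbance, which keeps control activity (and thus chattering) low.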
Citations: 0
Commonality Evaluation and Prediction Study of Light and Small Multi-Rotor UAVs
IF 4.8 · Q2 (Earth Science) · Q1 REMOTE SENSING · Pub Date: 2023-12-08 · DOI: 10.3390/drones7120698
Yongjie Zhang, Yongqi Zeng, K. Cao
Light, small-sized multi-rotor UAVs, with their notable advantages of portability, intelligence, and low cost, occupy a significant share of the civilian UAV market. To further reduce the full lifecycle cost of products, shorten development cycles, and increase market share, some manufacturers of these UAVs have adopted a series development strategy based on the concept of commonality in design. However, there is currently a lack of effective methods to quantify the commonality in UAV designs, which is key to guiding commonality design. In view of this, our study proposes a new UAV commonality evaluation model based on the basic composition of light, small-sized multi-rotor UAVs and the theory of design structure matrices. Through cross-evaluations of four models, the model has been confirmed to comprehensively quantify the degree of commonality between models. To achieve commonality prediction in the early stages of multi-rotor UAV design, we constructed a commonality prediction dataset centered on the commonality evaluation model using data from typical light, small-sized multi-rotor UAV models. After training on this dataset with convolutional neural networks, we successfully developed an effective predictive model for the commonality of new light, small-sized multi-rotor UAV models and verified the feasibility and effectiveness of this method through a case application in UAV design. The commonality evaluation and prediction models established in this study not only provide strong decision-making support for the series design and commonality design of UAV products but also offer new perspectives and tools for strategic development in this field.
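The kind of quantity such an evaluation model produces can be illustrated with a simple shared-component ratio between two models in a series: the fraction of components used by both out of all components used by either. This Jaccard-style index and the component list are illustrative assumptions, not the paper's design-structure-matrix model:

```python
import numpy as np

# Component usage across two hypothetical UAV models in a series
# (rows: components, columns: models; 1 = component used by that model).
components = ["flight controller", "ESC", "arm", "camera", "gimbal"]
usage = np.array([
    [1, 1],
    [1, 1],
    [1, 0],
    [1, 1],
    [0, 1],
])

shared = np.logical_and(usage[:, 0], usage[:, 1]).sum()  # used by both
total = np.logical_or(usage[:, 0], usage[:, 1]).sum()    # used by either
commonality = shared / total  # Jaccard-style commonality index in [0, 1]
print(f"commonality index = {commonality:.2f}")
```

A design structure matrix extends this scalar by also accounting for how components interact, so two models sharing the same parts in different architectures score differently.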
{"title":"Commonality Evaluation and Prediction Study of Light and Small Multi-Rotor UAVs","authors":"Yongjie Zhang, Yongqi Zeng, K. Cao","doi":"10.3390/drones7120698","DOIUrl":"https://doi.org/10.3390/drones7120698","url":null,"abstract":"Light small-sized, multi-rotor UAVs, with their notable advantages of portability, intelligence, and low cost, occupy a significant share in the civilian UAV market. To further reduce the full lifecycle cost of products, shorten development cycles, and increase market share, some manufacturers of these UAVs have adopted a series development strategy based on the concept of commonality in design. However, there is currently a lack of effective methods to quantify the commonality in UAV designs, which is key to guiding commonality design. In view of this, our study innovatively proposes a new UAV commonality evaluation model based on the basic composition of light small-sized multi-rotor UAVs and the theory of design structure matrices. Through cross-evaluations of four models, the model has been confirmed to comprehensively quantify the degree of commonality between models. To achieve commonality prediction in the early stages of multi-rotor UAV design, we constructed a commonality prediction dataset centered around the commonality evaluation model using data from typical light small-sized multi-rotor UAV models. After training this dataset with convolutional neural networks, we successfully developed an effective predictive model for the commonality of new light small-sized multi-rotor UAV models and verified the feasibility and effectiveness of this method through a case application in UAV design. 
The commonality evaluation and prediction models established in this study not only provide strong decision-making support for the series design and commonality design of UAV products but also offer new perspectives and tools for strategic development in this field.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"235 ","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139011309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
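The commonality idea in the entry above can be illustrated with a deliberately simplified metric. The paper builds its evaluation model from design structure matrices; the sketch below instead reduces each UAV model to a set of components and scores shared parts over the union (a Jaccard-style measure). The function name and the part names are hypothetical.

```python
def commonality_score(parts_a, parts_b):
    """Jaccard-style commonality between two UAV component sets.

    Illustrative stand-in only: the paper derives its metric from design
    structure matrices; here each model is reduced to the set of parts it
    uses, and commonality is the shared parts over the union."""
    a, b = set(parts_a), set(parts_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

For example, two rotor-craft sharing a motor and an ESC but differing in frame and GPS, `commonality_score(["motor", "esc", "frame"], ["motor", "esc", "gps"])`, score 0.5. A matrix of such pairwise scores across a product series is the kind of target a prediction model could be trained to estimate early in design.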
A Novel Adversarial Detection Method for UAV Vision Systems via Attribution Maps
IF 4.8, CAS Tier 2 (Earth Science), Q1 REMOTE SENSING, Pub Date: 2023-12-07, DOI: 10.3390/drones7120697
Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan
With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoT), UAV-assisted IoT has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoT systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks such as C&W. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their unique attribution maps and then training a classifier on these maps. Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing state-of-the-art methods.
{"title":"A Novel Adversarial Detection Method for UAV Vision Systems via Attribution Maps","authors":"Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan","doi":"10.3390/drones7120697","DOIUrl":"https://doi.org/10.3390/drones7120697","url":null,"abstract":"With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoTs), UAV-assisted IoTs has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoTs systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks like C&W and others. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their unique attribution maps and then training a classifier on these maps. 
Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing the state-of-the-art methods.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"52 49","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
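The detection pipeline in the entry above (extract attribution maps, then train a classifier on them) can be sketched end-to-end on a toy problem. The sketch below substitutes a linear model for the deep vision network, uses gradient-times-input as the attribution method, and a hand-rolled logistic regression as the detector; every value (dimensions, perturbation size, learning rate) is an assumption for illustration, not from the paper.

```python
import numpy as np

def detector_accuracy(n=200, d=20, eps=0.8, seed=0):
    """Illustrative attribution-map detector: a linear toy model stands in
    for the UAV vision network; none of the values come from the paper."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)                   # toy "model": score = w @ x

    clean = rng.normal(size=(n, d))          # genuine inputs
    adv = clean + eps * np.sign(w)           # FGSM-like gradient-aligned perturbation

    # Gradient-times-input attribution: for the linear model the gradient
    # of w @ x is w, so each attribution map is the elementwise product.
    X = np.vstack([clean, adv]) * w
    y = np.r_[np.zeros(n), np.ones(n)]       # 0 = genuine, 1 = adversarial

    # Logistic-regression detector trained on the attribution maps.
    theta, b = np.zeros(d), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
        theta -= 0.5 * X.T @ (p - y) / len(y)
        b -= 0.5 * np.mean(p - y)
    pred = 1.0 / (1.0 + np.exp(-(X @ theta + b))) > 0.5
    return float(np.mean(pred == y))
```

The separation works because the gradient-aligned perturbation shifts every attribution coordinate by a constant positive offset `eps * |w_i|`, which a linear detector picks up easily; deep networks require the learned classifier the paper describes.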