Nyo Me Htun, T. Owari, Satoshi Tsuyuki, Takuya Hiroshima
Uneven-aged mixed forests have been recognized as important contributors to biodiversity conservation, ecological stability, carbon sequestration, the provisioning of ecosystem services, and sustainable timber production. Recently, numerous studies have demonstrated the applicability of integrating remote sensing datasets with machine learning for forest management purposes, such as forest type classification and the identification of individual trees. However, studies focusing on the integration of unmanned aerial vehicle (UAV) datasets with machine learning for mapping tree species groups in uneven-aged mixed forests remain limited. Thus, this study explored the feasibility of integrating UAV imagery with semantic-segmentation-based machine learning classification algorithms to describe conifer and broadleaf species canopies in uneven-aged mixed forests. The study was conducted in two sub-compartments of the University of Tokyo Hokkaido Forest in northern Japan. We analyzed UAV images using the semantic-segmentation-based U-Net and random forest (RF) classification models. The results indicate that the integration of UAV imagery with the U-Net model generated reliable conifer and broadleaf canopy cover classification maps in both sub-compartments, while the RF model often failed to distinguish conifer crowns. Moreover, our findings demonstrate the potential of this method to detect dominant tree species groups in uneven-aged mixed forests.
{"title":"Integration of Unmanned Aerial Vehicle Imagery and Machine Learning Technology to Map the Distribution of Conifer and Broadleaf Canopy Cover in Uneven-Aged Mixed Forests","authors":"Nyo Me Htun, T. Owari, Satoshi Tsuyuki, Takuya Hiroshima","doi":"10.3390/drones7120705","DOIUrl":"https://doi.org/10.3390/drones7120705","url":null,"abstract":"Uneven-aged mixed forests have been recognized as important contributors to biodiversity conservation, ecological stability, carbon sequestration, the provisioning of ecosystem services, and sustainable timber production. Recently, numerous studies have demonstrated the applicability of integrating remote sensing datasets with machine learning for forest management purposes, such as forest type classification and the identification of individual trees. However, studies focusing on the integration of unmanned aerial vehicle (UAV) datasets with machine learning for mapping of tree species groups in uneven-aged mixed forests remain limited. Thus, this study explored the feasibility of integrating UAV imagery with semantic segmentation-based machine learning classification algorithms to describe conifer and broadleaf species canopies in uneven-aged mixed forests. The study was conducted in two sub-compartments of the University of Tokyo Hokkaido Forest in northern Japan. We analyzed UAV images using the semantic-segmentation based U-Net and random forest (RF) classification models. The results indicate that the integration of UAV imagery with the U-Net model generated reliable conifer and broadleaf canopy cover classification maps in both sub-compartments, while the RF model often failed to distinguish conifer crowns. Moreover, our findings demonstrate the potential of this method to detect dominant tree species groups in uneven-aged mixed forests.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"85 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139003949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study addresses the tracking control problem for a coaxial unmanned aerial vehicle (UAV) without any prior knowledge of its dynamic model. To overcome the limitations of model-based control, a model-free approach based on terminal sliding mode control is proposed for achieving precise position and rotation tracking. The terminal sliding mode technique is utilized to approximate the unknown nonlinear model of the system, while global stability of the overall system with finite-time convergence is guaranteed using Lyapunov theory. Additionally, the selection of control parameters is addressed by incorporating the accelerated particle swarm optimization (APSO) algorithm. Finally, numerical simulation tests demonstrate the effectiveness and feasibility of the proposed design, confirming that the model-free approach achieves accurate tracking control even without prior knowledge of the system's dynamic model.
{"title":"Optimal Model-Free Finite-Time Control Based on Terminal Sliding Mode for a Coaxial Rotor","authors":"Hossam-Eddine Glida, C. Sentouh, J. Rath","doi":"10.3390/drones7120706","DOIUrl":"https://doi.org/10.3390/drones7120706","url":null,"abstract":"This study focuses on addressing the tracking control problem for a coaxial unmanned aerial vehicle (UAV) without any prior knowledge of its dynamic model. To overcome the limitations of model-based control, a model-free approach based on terminal sliding mode control is proposed for achieving precise position and rotation tracking. The terminal sliding mode technique is utilized to approximate the unknown nonlinear model of the system, while the global stability with finite-time convergence of the overall system is guaranteed using the Lyapunov theory. Additionally, the selection of control parameters is addressed by incorporating the accelerated particle swarm optimization (APSO) algorithm. Finally, numerical simulation tests are provided to demonstrate the effectiveness and feasibility of the proposed design approach, which demonstrates the capability of the model-free control approach to achieve accurate tracking control even without prior knowledge of the system’s dynamic model.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"48 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139003234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faxue Liu, Xuan Wang, Qiqi Chen, Jinghong Liu, Chenglong Liu
In this paper, we address aerial tracking tasks by designing multi-phase aware networks that capture rich long-range dependencies. Existing methods are prone to tracking drift in scenarios that demand multi-layer long-range feature dependencies, such as viewpoint changes caused by the UAV shooting perspective and low resolution. In contrast to previous works that used only multi-scale feature fusion to obtain contextual information, we design a new architecture that adapts to the characteristics of different feature levels in challenging scenarios and adaptively integrates regional features with the corresponding global dependency information. Specifically, for the proposed tracker (SiamMAN), we first propose a two-stage aware neck (TAN), in which a cascaded splitting encoder (CSE) obtains the distributed long-range relevance among the sub-branches by splitting the feature channels, and a multi-level contextual decoder (MCD) then performs further global dependency fusion. Finally, we design the response map context encoder (RCE), which utilizes long-range contextual information in backpropagation to accomplish pixel-level updating of the deeper features and better balance semantic and spatial information. Experiments on well-known tracking benchmarks show that the proposed method outperforms state-of-the-art (SOTA) trackers, owing to the effective use of the proposed multi-phase aware network across feature levels.
{"title":"SiamMAN: Siamese Multi-Phase Aware Network for Real-Time Unmanned Aerial Vehicle Tracking","authors":"Faxue Liu, Xuan Wang, Qiqi Chen, Jinghong Liu, Chenglong Liu","doi":"10.3390/drones7120707","DOIUrl":"https://doi.org/10.3390/drones7120707","url":null,"abstract":"In this paper, we address aerial tracking tasks by designing multi-phase aware networks to obtain rich long-range dependencies. For aerial tracking tasks, the existing methods are prone to tracking drift in scenarios with high demand for multi-layer long-range feature dependencies such as viewpoint change caused by the characteristics of the UAV shooting perspective, low resolution, etc. In contrast to the previous works that only used multi-scale feature fusion to obtain contextual information, we designed a new architecture to adapt the characteristics of different levels of features in challenging scenarios to adaptively integrate regional features and the corresponding global dependencies information. Specifically, for the proposed tracker (SiamMAN), we first propose a two-stage aware neck (TAN), where first a cascaded splitting encoder (CSE) is used to obtain the distributed long-range relevance among the sub-branches by the splitting of feature channels, and then a multi-level contextual decoder (MCD) is used to achieve further global dependency fusion. Finally, we design the response map context encoder (RCE) utilizing long-range contextual information in backpropagation to accomplish pixel-level updating for the deeper features and better balance the semantic and spatial information. Several experiments on well-known tracking benchmarks illustrate that the proposed method outperforms SOTA trackers, which results from the effective utilization of the proposed multi-phase aware network for different levels of features.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"55 7","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139003705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating multiple drones autonomously in complex and unpredictable environments, such as forests, poses a significant challenge typically addressed by wireless communication for coordination. However, this approach falls short in situations with limited central control or blocked communications. Addressing this gap, our paper explores the learning of complex behaviors by multiple drones with limited vision. Drones in a swarm rely on onboard sensors, primarily forward-facing stereo cameras, for environmental perception and neighbor detection. They learn complex maneuvers through the imitation of a privileged expert system, which involves finding the optimal set of neural network parameters to enable the most effective mapping from sensory perception to control commands. The training process adopts the DAgger algorithm within a centralized-training, decentralized-execution framework. Using this technique, drones rapidly learn complex behaviors, such as avoiding obstacles, coordinating movements, and navigating to specified targets, all in the absence of wireless communication. This paper details the construction of a distributed multi-UAV cooperative motion model under limited vision, emphasizing the autonomy of each drone in achieving coordinated flight and obstacle avoidance. Our methodological approach and experimental results validate the effectiveness of the proposed vision-based end-to-end controller, paving the way for more sophisticated applications of multi-UAV systems in intricate, real-world scenarios.
{"title":"Imitation Learning of Complex Behaviors for Multiple Drones with Limited Vision","authors":"Yu Wan, Jun Tang, Zipeng Zhao","doi":"10.3390/drones7120704","DOIUrl":"https://doi.org/10.3390/drones7120704","url":null,"abstract":"Navigating multiple drones autonomously in complex and unpredictable environments, such as forests, poses a significant challenge typically addressed by wireless communication for coordination. However, this approach falls short in situations with limited central control or blocked communications. Addressing this gap, our paper explores the learning of complex behaviors by multiple drones with limited vision. Drones in a swarm rely on onboard sensors, primarily forward-facing stereo cameras, for environmental perception and neighbor detection. They learn complex maneuvers through the imitation of a privileged expert system, which involves finding the optimal set of neural network parameters to enable the most effective mapping from sensory perception to control commands. The training process adopts the Dagger algorithm, employing the framework of centralized training with decentralized execution. Using this technique, drones rapidly learn complex behaviors, such as avoiding obstacles, coordinating movements, and navigating to specified targets, all in the absence of wireless communication. This paper details the construction of a distributed multi-UAV cooperative motion model under limited vision, emphasizing the autonomy of each drone in achieving coordinated flight and obstacle avoidance. Our methodological approach and experimental results validate the effectiveness of the proposed vision-based end-to-end controller, paving the way for more sophisticated applications of multi-UAV systems in intricate, real-world scenarios.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"52 8","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139003716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hanpeng Li, Kai Mao, Xuchao Ye, Taotao Zhang, Qiuming Zhu, Manxi Wang, Yurao Ge, Hangang Li, Farman Ali
Unmanned aerial vehicles (UAVs) have found expanding use in smart agriculture. Path loss (PL) is of significant importance in the link budget of UAV-aided air-to-ground (A2G) communications. This paper proposes a machine-learning-based PL model for A2G communication in agricultural scenarios. Specifically, a double-weight neurons-based artificial neural network (DWN-ANN) is proposed, which strikes a balance between the amount of measurement data required and prediction accuracy by using ray tracing (RT) simulation data for pre-training and measurement data for optimization training. Moreover, an RT pre-correction module is introduced into the DWN-ANN to compensate for the impact of varying farmland materials on the accuracy of RT simulation, thereby improving the quality of the RT simulation data. Finally, channel measurement campaigns were carried out over a farmland area at 3.6 GHz, and the measurement data were used for training and validation of the proposed DWN-ANN. The predictions of the proposed PL model agree well with the measurement data and outperform traditional empirical models.
{"title":"Air-to-Ground Path Loss Model at 3.6 GHz under Agricultural Scenarios Based on Measurements and Artificial Neural Networks","authors":"Hanpeng Li, Kai Mao, Xuchao Ye, Taotao Zhang, Qiuming Zhu, Manxi Wang, Yurao Ge, Hangang Li, Farman Ali","doi":"10.3390/drones7120701","DOIUrl":"https://doi.org/10.3390/drones7120701","url":null,"abstract":"Unmanned aerial vehicles (UAVs) have found expanding utilization in smart agriculture. Path loss (PL) is of significant importance in the link budget of UAV-aided air-to-ground (A2G) communications. This paper proposes a machine-learning-based PL model for A2G communication in agricultural scenarios. On this basis, a double-weight neurons-based artificial neural network (DWN-ANN) is proposed, which can strike a fine equilibrium between the amount of measurement data and the accuracy of predictions by using ray tracing (RT) simulation data for pre-training and measurement data for optimization training. Moreover, an RT pre-correction module is introduced into the DWN-ANN to optimize the impact of varying farmland materials on the accuracy of RT simulation, thereby improving the accuracy of RT simulation data. Finally, channel measurement campaigns are carried out over a farmland area at 3.6 GHz, and the measurement data are used for the training and validation of the proposed DWN-ANN. The prediction results of the proposed PL model demonstrate a fine concordance with the measurement data and are better than the traditional empirical models.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"128 3","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138981601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Huarui Xv, Lei Zhao, Mingjian Wu, Kun Liu, Hongyue Zhang, Zhilin Wu
Ducted UAVs have attracted much attention because the duct structure reduces propeller tip vortices and thus increases the effective lift area of the lower propeller. This paper investigates the effects of structural parameters, such as the coaxial twin-propeller configuration and the duct geometry, on the aerodynamic characteristics of ducted UAVs. The aerodynamic characteristics of the UAV were analyzed using CFD methods, and the sensitivity of the simulation results to each parameter was ranked using the orthogonal test method. The results indicate that, while maintaining overall strength, increasing the propeller spacing by about 0.055 times the duct chord length can increase the lift of the upper propeller by approximately 1.3%. Reducing the distance between the propeller and the top surface of the duct by about 0.5 times the duct chord length can increase the lift of the lower propeller by approximately 7.7%. Increasing the chord length of the duct cross-section by about 35.3% can increase the lift generated by the duct structure and the total lift of the UAV by approximately 150.6% and 15.7%, respectively. This research provides valuable guidance and reference for the subsequent overall design of ducted UAVs.
{"title":"Analysis of the Impact of Structural Parameter Changes on the Overall Aerodynamic Characteristics of Ducted UAVs","authors":"Huarui Xv, Lei Zhao, Mingjian Wu, Kun Liu, Hongyue Zhang, Zhilin Wu","doi":"10.3390/drones7120702","DOIUrl":"https://doi.org/10.3390/drones7120702","url":null,"abstract":"Ducted UAVs have attracted much attention because the duct structure can reduce the propeller tip vortices and thus increase the effective lift area of the lower propeller. This paper investigates the effects of parameters on the aerodynamic characteristics of ducted UAVs, such as co-axial twin propeller configuration and duct structure. The aerodynamic characteristics of the UAV were analyzed using CFD methods, while the impact sensitivity analysis of the simulation data was sorted using the orthogonal test method. The results indicate that, while maintaining overall strength, increasing the propeller spacing by about 0.055 times the duct chord length can increase the lift of the upper propeller by approximately 1.3% faster. Reducing the distance between the propeller and the top surface of the duct by about 0.5 times the duct chord length can increase the lift of the lower propeller by approximately 7.7%. Increasing the chord length of the duct cross-section by about 35.3% can simultaneously make the structure of the duct and the total lift of the drone faster by approximately 150.6% and 15.7%, respectively. This research provides valuable guidance and reference for the subsequent overall design of ducted UAVs.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"18 4","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138979053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. A convolutional neural network (CNN) and a recurrent neural network (RNN) are employed as encoders for the different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework.
{"title":"An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)","authors":"Zhiyao Xiao, Guobao Zhang","doi":"10.3390/drones7120699","DOIUrl":"https://doi.org/10.3390/drones7120699","url":null,"abstract":"Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. Convolutional neural network (CNN) and recurrent neural network (RNN) are employed as encoders for different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"565 ","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138983155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a fixed-time extended state observer-based adaptive sliding mode controller, evaluated on a quadrotor unmanned aerial vehicle executing a desired trajectory under severe turbulent wind. Since both the state and the model of the system are assumed to be only partially known, the observer, whose convergence is independent of the system's initial states, estimates the full state, model uncertainties, and the effects of turbulent wind in fixed time. This information is then compensated via feedback control by a class of adaptive sliding mode controller, which is robust to perturbations and reduces the chattering effect by not overestimating its adaptive gain. Furthermore, the stability of the closed-loop system is analyzed by means of Lyapunov theory. Finally, simulation results validate the feasibility and advantages of the proposed strategy, where the observer enhances performance. For further demonstration, a comparison with an existing approach is provided.
{"title":"Fixed-Time Extended Observer-Based Adaptive Sliding Mode Control for a Quadrotor UAV under Severe Turbulent Wind","authors":"Armando Miranda-Moya, H. Castañeda, Hesheng Wang","doi":"10.3390/drones7120700","DOIUrl":"https://doi.org/10.3390/drones7120700","url":null,"abstract":"This paper presents a fixed-time extended state observer-based adaptive sliding mode controller evaluated in a quadrotor unmanned aerial vehicle subject to severe turbulent wind while executing a desired trajectory. Since both the state and model of the system are assumed to be partially known, the observer, whose convergence is independent from the initial states of the system, estimates the full state, model uncertainties, and the effects of turbulent wind in fixed time. Such information is then compensated via feedback control conducted by a class of adaptive sliding mode controller, which is robust to perturbations and reduces the chattering effect by non-overestimating its adaptive gain. Furthermore, the stability of the closed-loop system is analyzed by means of the Lyapunov theory. Finally, simulation results validate the feasibility and advantages of the proposed strategy, where the observer enhances performance. For further demonstration, a comparison with an existent approach is provided.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"321 2","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138983313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light, small-sized multi-rotor UAVs, with their notable advantages of portability, intelligence, and low cost, occupy a significant share of the civilian UAV market. To further reduce full-lifecycle product costs, shorten development cycles, and increase market share, some manufacturers of these UAVs have adopted a series development strategy based on the concept of commonality in design. However, there is currently a lack of effective methods to quantify commonality in UAV designs, which is key to guiding commonality design. In view of this, our study proposes a new UAV commonality evaluation model based on the basic composition of light, small-sized multi-rotor UAVs and the theory of design structure matrices. Through cross-evaluations of four models, the model has been confirmed to comprehensively quantify the degree of commonality between UAV models. To enable commonality prediction in the early stages of multi-rotor UAV design, we constructed a commonality prediction dataset, centered on the commonality evaluation model, from data on typical light, small-sized multi-rotor UAV models. After training on this dataset with convolutional neural networks, we developed an effective predictive model for the commonality of new light, small-sized multi-rotor UAV designs and verified the feasibility and effectiveness of this method through a case application in UAV design. The commonality evaluation and prediction models established in this study not only provide strong decision-making support for the series design and commonality design of UAV products but also offer new perspectives and tools for strategic development in this field.
{"title":"Commonality Evaluation and Prediction Study of Light and Small Multi-Rotor UAVs","authors":"Yongjie Zhang, Yongqi Zeng, K. Cao","doi":"10.3390/drones7120698","DOIUrl":"https://doi.org/10.3390/drones7120698","url":null,"abstract":"Light small-sized, multi-rotor UAVs, with their notable advantages of portability, intelligence, and low cost, occupy a significant share in the civilian UAV market. To further reduce the full lifecycle cost of products, shorten development cycles, and increase market share, some manufacturers of these UAVs have adopted a series development strategy based on the concept of commonality in design. However, there is currently a lack of effective methods to quantify the commonality in UAV designs, which is key to guiding commonality design. In view of this, our study innovatively proposes a new UAV commonality evaluation model based on the basic composition of light small-sized multi-rotor UAVs and the theory of design structure matrices. Through cross-evaluations of four models, the model has been confirmed to comprehensively quantify the degree of commonality between models. To achieve commonality prediction in the early stages of multi-rotor UAV design, we constructed a commonality prediction dataset centered around the commonality evaluation model using data from typical light small-sized multi-rotor UAV models. After training this dataset with convolutional neural networks, we successfully developed an effective predictive model for the commonality of new light small-sized multi-rotor UAV models and verified the feasibility and effectiveness of this method through a case application in UAV design. The commonality evaluation and prediction models established in this study not only provide strong decision-making support for the series design and commonality design of UAV products but also offer new perspectives and tools for strategic development in this field.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"235 ","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139011309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan
With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoT), UAV-assisted IoT has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoT systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks such as C&W. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their attribution maps and training a classifier on these maps. Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing state-of-the-art methods.
{"title":"A Novel Adversarial Detection Method for UAV Vision Systems via Attribution Maps","authors":"Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan","doi":"10.3390/drones7120697","DOIUrl":"https://doi.org/10.3390/drones7120697","url":null,"abstract":"With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoTs), UAV-assisted IoTs has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoTs systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks like C&W and others. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their unique attribution maps and then training a classifier on these maps. Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing the state-of-the-art methods.","PeriodicalId":36448,"journal":{"name":"Drones","volume":"52 49","pages":""},"PeriodicalIF":4.8,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}