A New Approach to Classify Drones Using a Deep Convolutional Neural Network

Drones, Pub Date: 2024-07-12, DOI: 10.3390/drones8070319
Hrishi Rakshit, Pooneh Bagheri Zadeh
{"title":"使用深度卷积神经网络对无人机进行分类的新方法","authors":"Hrishi Rakshit, Pooneh Bagheri Zadeh","doi":"10.3390/drones8070319","DOIUrl":null,"url":null,"abstract":"In recent years, the widespread adaptation of Unmanned Aerial Vehicles (UAVs), commonly known as drones, among the public has led to significant security concerns, prompting intense research into drones’ classification methodologies. The swift and accurate classification of drones poses a considerable challenge due to their diminutive size and rapid movements. To address this challenge, this paper introduces (i) a novel drone classification approach utilizing deep convolution and deep transfer learning techniques. The model incorporates bypass connections and Leaky ReLU activation functions to mitigate the ‘vanishing gradient problem’ and the ‘dying ReLU problem’, respectively, associated with deep networks and is trained on a diverse dataset. This study employs (ii) a custom dataset comprising both audio and visual data of drones as well as analogous objects like an airplane, birds, a helicopter, etc., to enhance classification accuracy. The integration of audio–visual information facilitates more precise drone classification. Furthermore, (iii) a new Finite Impulse Response (FIR) low-pass filter is proposed to convert audio signals into spectrogram images, reducing susceptibility to noise and interference. The proposed model signifies a transformative advancement in convolutional neural networks’ design, illustrating the compatibility of efficacy and efficiency without compromising on complexity and learnable properties. A notable performance was demonstrated by the proposed model, with an accuracy of 100% achieved on the test images using only four million learnable parameters. In contrast, the Resnet50 and Inception-V3 models exhibit 90% accuracy each on the same test set, despite the employment of 23.50 million and 21.80 million learnable parameters, respectively.","PeriodicalId":507567,"journal":{"name":"Drones","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A New Approach to Classify Drones Using a Deep Convolutional Neural Network\",\"authors\":\"Hrishi Rakshit, Pooneh Bagheri Zadeh\",\"doi\":\"10.3390/drones8070319\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, the widespread adaptation of Unmanned Aerial Vehicles (UAVs), commonly known as drones, among the public has led to significant security concerns, prompting intense research into drones’ classification methodologies. The swift and accurate classification of drones poses a considerable challenge due to their diminutive size and rapid movements. To address this challenge, this paper introduces (i) a novel drone classification approach utilizing deep convolution and deep transfer learning techniques. The model incorporates bypass connections and Leaky ReLU activation functions to mitigate the ‘vanishing gradient problem’ and the ‘dying ReLU problem’, respectively, associated with deep networks and is trained on a diverse dataset. This study employs (ii) a custom dataset comprising both audio and visual data of drones as well as analogous objects like an airplane, birds, a helicopter, etc., to enhance classification accuracy. The integration of audio–visual information facilitates more precise drone classification. 
Furthermore, (iii) a new Finite Impulse Response (FIR) low-pass filter is proposed to convert audio signals into spectrogram images, reducing susceptibility to noise and interference. The proposed model signifies a transformative advancement in convolutional neural networks’ design, illustrating the compatibility of efficacy and efficiency without compromising on complexity and learnable properties. A notable performance was demonstrated by the proposed model, with an accuracy of 100% achieved on the test images using only four million learnable parameters. In contrast, the Resnet50 and Inception-V3 models exhibit 90% accuracy each on the same test set, despite the employment of 23.50 million and 21.80 million learnable parameters, respectively.\",\"PeriodicalId\":507567,\"journal\":{\"name\":\"Drones\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Drones\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/drones8070319\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Drones","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/drones8070319","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, the widespread adoption of Unmanned Aerial Vehicles (UAVs), commonly known as drones, by the general public has raised significant security concerns, prompting intense research into drone classification methodologies. Classifying drones swiftly and accurately is a considerable challenge because of their small size and rapid movements. To address this challenge, this paper introduces (i) a novel drone classification approach utilizing deep convolution and deep transfer learning techniques. The model incorporates bypass connections and Leaky ReLU activation functions to mitigate the ‘vanishing gradient problem’ and the ‘dying ReLU problem’, respectively, associated with deep networks, and is trained on a diverse dataset. This study employs (ii) a custom dataset comprising both audio and visual data of drones as well as analogous objects such as airplanes, birds, and helicopters to enhance classification accuracy; the integration of audio–visual information facilitates more precise drone classification. Furthermore, (iii) a new Finite Impulse Response (FIR) low-pass filter is proposed to convert audio signals into spectrogram images, reducing susceptibility to noise and interference. The proposed model signifies a transformative advancement in convolutional neural network design, illustrating that efficacy and efficiency are compatible without compromising on complexity or learnable properties. The proposed model demonstrated notable performance, achieving 100% accuracy on the test images with only four million learnable parameters. In contrast, the ResNet50 and Inception-V3 models each exhibit 90% accuracy on the same test set, despite employing 23.50 million and 21.80 million learnable parameters, respectively.
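
The abstract does not describe the network layer by layer, but its core design idea, pairing bypass (skip) connections with Leaky ReLU activations to counter the vanishing-gradient and dying-ReLU problems, can be illustrated with a short sketch. The PyTorch block below is a minimal, hypothetical example of such a unit; the 3×3 convolutions, batch normalization, and 0.01 negative slope are assumptions for illustration, not the authors' published configuration.

```python
# Minimal sketch (assumed layout, not the paper's architecture) of a convolutional
# block that combines a bypass connection with Leaky ReLU activation.
import torch
import torch.nn as nn

class BypassConvBlock(nn.Module):
    """Conv -> BN -> LeakyReLU, twice, with an additive bypass (skip) connection."""
    def __init__(self, in_channels: int, out_channels: int, negative_slope: float = 0.01):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Leaky ReLU keeps a small gradient for negative inputs, so units cannot "die".
        self.act = nn.LeakyReLU(negative_slope)
        # 1x1 projection so the bypass path matches the main path's channel count.
        self.bypass = (nn.Identity() if in_channels == out_channels
                       else nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The bypass connection gives gradients a short path back to earlier layers.
        return self.act(out + self.bypass(x))

if __name__ == "__main__":
    block = BypassConvBlock(3, 32)
    y = block(torch.randn(1, 3, 224, 224))
    print(y.shape)  # torch.Size([1, 32, 224, 224])
```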
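
Similarly, the audio pre-processing step, low-pass filtering a recording with an FIR filter before turning it into a spectrogram image, can be sketched in a few lines. The example below relies on assumed parameters (48 kHz sampling rate, 4 kHz cutoff, 101 windowed-sinc taps) and SciPy's standard firwin/lfilter/spectrogram utilities; it is not the specific filter proposed in the paper.

```python
# Minimal sketch of FIR low-pass filtering followed by spectrogram computation.
# Sampling rate, cutoff, and tap count are illustrative assumptions.
import numpy as np
from scipy.signal import firwin, lfilter, spectrogram

def audio_to_lowpassed_spectrogram(audio: np.ndarray, fs: int = 48_000,
                                   cutoff_hz: float = 4_000.0, numtaps: int = 101):
    """Low-pass filter a 1-D audio signal with a windowed-sinc FIR filter,
    then return its magnitude spectrogram in decibels."""
    # Windowed-sinc FIR low-pass design (Hamming window by default).
    taps = firwin(numtaps, cutoff_hz, fs=fs)
    filtered = lfilter(taps, [1.0], audio)
    # Short-time Fourier magnitude -> the spectrogram "image" fed to the classifier.
    freqs, times, sxx = spectrogram(filtered, fs=fs, nperseg=1024, noverlap=512)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)  # small epsilon avoids log(0)
    return freqs, times, sxx_db

if __name__ == "__main__":
    t = np.linspace(0, 1.0, 48_000, endpoint=False)
    # Synthetic test tone: 1 kHz component kept, 12 kHz component attenuated.
    demo = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 12_000 * t)
    f, tt, s = audio_to_lowpassed_spectrogram(demo)
    print(s.shape)  # (frequency bins, time frames)
```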