Multi-Branch Dilation Convolution CenterNet for Object Detection of Underwater Vehicles

IF 0.9 | CAS Zone 4 (Engineering & Technology) | JCR Q4 (Computer Science, Hardware & Architecture) | Journal of Circuits Systems and Computers | Pub Date: 2023-10-11 | DOI: 10.1142/s0218126624501019
Chen Liang, Mingliang Zhou, Fuqiang Liu, Yi Qin
{"title":"水下航行器目标检测的多分支扩展卷积中心网","authors":"Chen Liang, Mingliang Zhou, Fuqiang Liu, Yi Qin","doi":"10.1142/s0218126624501019","DOIUrl":null,"url":null,"abstract":"Object detection occupies a very important position in the fishing operation and autonomous navigation of underwater vehicles. At present, most deep-learning object detection approaches, such as R-CNN, SPPNet, R-FCN, etc., have two stages and are based on anchors. However, the previous methods generally have the problems of weak generalization ability and not high enough computational efficiency due to the generation of anchors. As a well-known one-stage anchor-free method, CenterNet can accelerate the inference speed by omitting the step of generating anchors, whereas it is difficult to extract sufficient global information because of the residual structure at the bottom layer, which leads to low detection precision for the overlapping targets. Dilation convolution makes the kernel obtain a larger reception field and access more information. Multi-branch structure can not only preserve the whole area information, but also efficiently separate foreground and background. By combining the dilation convolution and multi-branch structure, multi-branch dilation convolution is proposed and applied to the Hourglass backbone network in CenterNet, then an improved CenterNet named multi-branch dilation convolution CenterNet (MDC-CenterNet) is built, which has a stronger ability of object detection. The proposed method is successfully utilized for detection of underwater organisms including holothurian, scallop, echinus and starfish, and the comparison result shows that it outperforms the original CenterNet and the classical object detection network. Moreover, with the MS-COCO and PASCAL VOC datasets, a number of comparative experiments are performed for showing the advancement of our method compared to other best methods.","PeriodicalId":54866,"journal":{"name":"Journal of Circuits Systems and Computers","volume":"25 1","pages":"0"},"PeriodicalIF":0.9000,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-Branch Dilation Convolution CenterNet for Object Detection of Underwater Vehicles\",\"authors\":\"Chen Liang, Mingliang Zhou, Fuqiang Liu, Yi Qin\",\"doi\":\"10.1142/s0218126624501019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Object detection occupies a very important position in the fishing operation and autonomous navigation of underwater vehicles. At present, most deep-learning object detection approaches, such as R-CNN, SPPNet, R-FCN, etc., have two stages and are based on anchors. However, the previous methods generally have the problems of weak generalization ability and not high enough computational efficiency due to the generation of anchors. As a well-known one-stage anchor-free method, CenterNet can accelerate the inference speed by omitting the step of generating anchors, whereas it is difficult to extract sufficient global information because of the residual structure at the bottom layer, which leads to low detection precision for the overlapping targets. Dilation convolution makes the kernel obtain a larger reception field and access more information. Multi-branch structure can not only preserve the whole area information, but also efficiently separate foreground and background. 
By combining the dilation convolution and multi-branch structure, multi-branch dilation convolution is proposed and applied to the Hourglass backbone network in CenterNet, then an improved CenterNet named multi-branch dilation convolution CenterNet (MDC-CenterNet) is built, which has a stronger ability of object detection. The proposed method is successfully utilized for detection of underwater organisms including holothurian, scallop, echinus and starfish, and the comparison result shows that it outperforms the original CenterNet and the classical object detection network. Moreover, with the MS-COCO and PASCAL VOC datasets, a number of comparative experiments are performed for showing the advancement of our method compared to other best methods.\",\"PeriodicalId\":54866,\"journal\":{\"name\":\"Journal of Circuits Systems and Computers\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2023-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Circuits Systems and Computers\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/s0218126624501019\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Circuits Systems and Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s0218126624501019","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Object detection plays a very important role in the fishing operations and autonomous navigation of underwater vehicles. At present, most deep-learning object detection approaches, such as R-CNN, SPPNet and R-FCN, are two-stage and anchor-based. However, these methods generally suffer from weak generalization ability and insufficient computational efficiency due to anchor generation. As a well-known one-stage anchor-free method, CenterNet accelerates inference by omitting the anchor-generation step; however, the residual structure in its lower layers makes it difficult to extract sufficient global information, which leads to low detection precision for overlapping targets. Dilated convolution gives the kernel a larger receptive field and access to more information, while a multi-branch structure can both preserve whole-area information and efficiently separate foreground from background. By combining dilated convolution with a multi-branch structure, a multi-branch dilation convolution is proposed and applied to the Hourglass backbone of CenterNet, yielding an improved network named multi-branch dilation convolution CenterNet (MDC-CenterNet) with stronger object detection ability. The proposed method is successfully applied to the detection of underwater organisms including holothurian, scallop, echinus and starfish, and the comparison results show that it outperforms the original CenterNet and classical object detection networks. Moreover, a number of comparative experiments on the MS-COCO and PASCAL VOC datasets demonstrate the advantages of our method over other state-of-the-art methods.
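The abstract gives no implementation details, but the core idea it describes, several parallel dilated 3x3 convolutions whose outputs are fused so that each position sees both local detail and wider context, can be illustrated with a short sketch. The following is a minimal PyTorch sketch under assumed design choices (three branches with dilation rates 1, 2 and 4, concatenation followed by a 1x1 fusion convolution); the actual MDC block inserted into the Hourglass backbone of MDC-CenterNet may differ.

```python
# Minimal sketch of a multi-branch dilated convolution block in the spirit of
# the paper's MDC module. Branch count, dilation rates and the
# concatenate-then-fuse aggregation are illustrative assumptions, not the
# paper's exact design.
import torch
import torch.nn as nn


class MultiBranchDilatedConv(nn.Module):
    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 convolution per branch; padding equals the dilation rate so
        # every branch keeps the same spatial size despite a different
        # receptive field.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # A 1x1 convolution fuses the concatenated branch outputs back to
        # out_channels, mixing local and wider-context features.
        self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels,
                              kernel_size=1, bias=False)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Smoke test on a feature map of the size an Hourglass stage might produce.
    x = torch.randn(1, 256, 64, 64)
    block = MultiBranchDilatedConv(256, 256)
    print(block(x).shape)  # torch.Size([1, 256, 64, 64])
```

Replacing a residual unit in the Hourglass backbone with a block of this kind would enlarge the receptive field without reducing spatial resolution, which is consistent with the abstract's motivation for improving detection of overlapping targets.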
Source Journal
Journal of Circuits Systems and Computers (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 2.80
Self-citation rate: 26.70%
Articles per year: 350
Review time: 5.4 months
Journal Description: Journal of Circuits, Systems, and Computers covers a wide scope, ranging from mathematical foundations to practical engineering design in the general areas of circuits, systems, and computers, with a focus on their circuit aspects. Although primary emphasis is on research papers, survey, expository and tutorial papers are also welcome. The journal consists of two sections: Papers - Contributions in this section may be of a research or tutorial nature. Research papers must be original and must not duplicate descriptions or derivations available elsewhere. The author should limit paper length whenever this can be done without impairing quality. Letters - This section provides a vehicle for speedy publication of new results and information of current interest in circuits, systems, and computers. Focus is directed to practical design- and applications-oriented contributions, but publication in this section is not restricted to this material. These letters concentrate on reporting the results obtained, their significance and the conclusions, while including only the minimum of supporting details required to understand the contribution. Publication of a manuscript in this manner does not preclude a later publication with a fully developed version.
Latest Articles in This Journal
An Intelligent Apple Identification Method via the Collaboration of YOLOv5 Algorithm and Fast-Guided Filter Theory
Careful Seeding for k-Medois Clustering with Incremental k-Means++ Initialization
Analysis and Simulation of Current Balancer Circuit for Phase-Gain Correction of Unbalanced Differential Signals
SPC-Indexed Indirect Branch Hardware Cache Redirecting Technique in Binary Translation
Image Classification Method Based on Multi-Scale Convolutional Neural Network