Key sub-region feature fusion network for fine-grained ship detection and recognition in remote sensing images (关键子区域特征融合网络用于遥感图像的细粒度船舶检测与识别)

Q3 Computer Science · Journal of Image and Graphics (中国图象图形学报) · Pub Date: 2023-01-01 · DOI: 10.11834/jig.220671
Zhang Lei, Chen Wen, Wang Yuehuan
{"title":"关键子区域特征融合网络用于遥感图像的细粒度船舶检测与识别","authors":"Zhang Lei, Chen Wen, Wang Yuehuan","doi":"10.11834/jig.220671","DOIUrl":null,"url":null,"abstract":"目的 遥感图像中的舰船目标细粒度检测与识别在港口海域监视以及情报搜集等应用中有很高的实际应用价值,但遥感图像中不同种类的舰船目标整体颜色、形状与纹理特征相近,分辨力不足,导致舰船细粒度识别困难。针对该问题,提出了一种端到端的基于关键子区域特征的舰船细粒度检测与识别方法。方法 为了获得更适于目标细粒度识别的特征,提出多层次特征融合识别网络,按照整体、局部子区域两个层次从检测网络得到的候选目标区域中提取特征。然后结合候选目标中所有子区域的信息计算每个子区域的判别性显著度,对含有判别性组件的关键子区域进行挖掘。最后基于判别性显著度将子区域特征与整体特征进行自适应融合,形成表征能力更强的特征,对舰船目标进行细粒度识别。整个检测与识别网络采用端到端一体化设计,所有候选目标特征提取过程只需要经过一次骨干网络的计算,提高了计算效率。结果 在公开的带有细粒度类别标签的 HRSC2016(high resolu-tion ship collection)数据集 L3 任务上,本文方法平均准确率为 77.3%,相较于不采用多层次特征融合识别网络提升了 6.3%;在自建的包含 45 类舰船目标的 FGSAID(fine-grained ships in aerial images dataset)数据集上,本文方法平均准确率为 71.5%。结论 本文方法有效挖掘并融合了含有判别性组件的子区域的特征,解决了目标整体特征分辨力不足导致的细粒度目标识别困难问题,相较于现有的遥感图像舰船目标检测与识别算法准确性有明显提升。;Objective The ocean has great economic and military value.The development of human society increases the impact of ocean activities on the development of a country.The sea is an important carrier of marine activities.Thus, the recognition and monitoring of ship targets in key sea areas through remote sensing images are crucial to the national defense and development of the economy.Fine-grained ship detection and recognition in high-resolution remote sensing images refer to the identification of specific types of ships based on ship detection.A precise and detailed classification is valuable in practical application fields, such as sea surveillance and intelligence gathering.Instead of coarse-grained classification categories, such as warcraft and merchant ships, specific ship types, such as Arleigh Burke-class destroyer, Nimitz-class aircraft carrier, container, and car carrier, are necessary.However, the overall color, shape, and texture of different types of ship targets are similar.The structures of ships belong to different types, but their uses are similar.Moreover, the coating color of military ships is monotonous.These characteristics complicate the classification of these targets.The existing ship detectors are designed to focus on locating targets.The design of the classification branch of these detectors is relatively simple.They only use the features of whole targets for classification, significantly decreasing the performance in the fine-grained labeled datasets.The existing ship classification methods, which mainly classify targets on the pre-cropped image patches, are separated from the detection process.This approach is unsatisfactory for practical applications for two reasons:1)the whole backbone of these methods based on neural networks must be executed on every proposal to extract features.The remote sensing images of the harbor usually include several ships;thus, the computation cost increases sharply.2)The detection and classification networks are optimized separately, and the parameters of both networks are optimized to the best.The whole process cannot obtain the optimal solution because the locations of proposals obtained by detection methods vary with the pre-cropped image patches.utilize prior knowledge of ships and propose the key sub-region feature fusion network(KSFFN), which fuses features of sub-regions that are discriminative to the whole feature and combines detection and fine-grained recognition into one framework.Method KSFFN uses ResNet-50 as the backbone network to extract features and construct a proposal locating network by combining Faster R-CNN with region of interest(ROI) Transformer for obtaining proposal locations.Then, all of the proposals are ranked 
according to the probability of targets.The proposals with low probability are filtered.Then, the multi-level feature fusion recognition network(MLFFRN)is proposed to extract features from the proposals generated by the proposal locating network and to classify the proposals.First, the proposals are separated into several subregions along the axis and the overall features and sub-region features are extracted from different levels of the feature pyramid.Then, the self-supervision mechanism in the navigator-teacherscrutinizer network(NTS-Net)finds the key subregion that may contain important parts contributing to fine-grained recognition.Due to the limitation of image quality and characteristics of the target, not all targets have a very discriminating subregion.Moreover, the self-supervision mechanism in NTS-Net cannot reflect this subregion.Therefore, the information from all subregions in the proposal is utilized to calculate the discriminant significance of the subregion, which reflects the influence of the subregion on target recognition.Based on the discriminant significance, the weight of the sub-region is calculated, and the key sub-region features are fused with the overall features according to the weight.The combined feature is used to obtain the final classification result, thereby improving the accuracy of fine-grained recognition of ship targets.Result Public high resolution ship collection 2016(HRSC2016)dataset L3 task and self-built fine-grained ships in aerial images dataset(FGSAID)are used to evaluate the model.HRSC2016 dataset contains 1 061 images with 2 886 ships divided into 19 types.FGSAID dataset contains 1 690 images with 5 410 ships divided into 45 types.The average precision(AP)is used as an evaluation metric, and the intersection over union is set as 0.5 to determine whether the prediction box matches the ground truth.On the HRSC2016 dataset L3 task, the proposed method achieves an AP of 77.3%, and MLFFRN can improve the AP by 6.3%.On the FGSAID dataset, our method achieves an AP of 71.5%.A series of ablation experiments is conducted on the HRSC2016 dataset L3 task to show the effectiveness of different parts of the proposed method.In addition, the proposed method is compared with the state-of-the-art deep-learning-based ship detection framework on two datasets.The experiment results show that our model outperforms all other methods on both datasets.Compared with single-shot alignment network(S2ANet)network, the proposed method increases the AP by 7.8% and 8.9% on HRSC2016 and FGSAID, respectively.In particular, the AP of the proposed method increases by 16.7%, 11.1%, and 1.1% for the aircraft carrier/amphibious assault ships, other warships, and merchant ships, respectively, in the FGSAID dataset.Conclusion In this study, the end-to-end fine-grained ship detection and recognition network KSFFN is proposed.It extracts the overall features and sub-region features of the proposals and fuses them according to the discriminant significance.The proposed method combines detection and fine-grained recognition into one framework, thereby improving the processing speed greatly while performing excellently.Thus, KSFFN has great application value.The proposed method has a more powerful classification framework and can achieve more accurate results than the existing detection method.The experiment results show that our method outperforms several state-of-the-art deep-learning-based ship detection frameworks, thereby proving the effectiveness of 
KSFFN.","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Key sub-region feature fusion network for fine-grained ship detection and recognition in remote sensing images\",\"authors\":\"Zhang Lei, Chen Wen, Wang Yuehuan\",\"doi\":\"10.11834/jig.220671\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"目的 遥感图像中的舰船目标细粒度检测与识别在港口海域监视以及情报搜集等应用中有很高的实际应用价值,但遥感图像中不同种类的舰船目标整体颜色、形状与纹理特征相近,分辨力不足,导致舰船细粒度识别困难。针对该问题,提出了一种端到端的基于关键子区域特征的舰船细粒度检测与识别方法。方法 为了获得更适于目标细粒度识别的特征,提出多层次特征融合识别网络,按照整体、局部子区域两个层次从检测网络得到的候选目标区域中提取特征。然后结合候选目标中所有子区域的信息计算每个子区域的判别性显著度,对含有判别性组件的关键子区域进行挖掘。最后基于判别性显著度将子区域特征与整体特征进行自适应融合,形成表征能力更强的特征,对舰船目标进行细粒度识别。整个检测与识别网络采用端到端一体化设计,所有候选目标特征提取过程只需要经过一次骨干网络的计算,提高了计算效率。结果 在公开的带有细粒度类别标签的 HRSC2016(high resolu-tion ship collection)数据集 L3 任务上,本文方法平均准确率为 77.3%,相较于不采用多层次特征融合识别网络提升了 6.3%;在自建的包含 45 类舰船目标的 FGSAID(fine-grained ships in aerial images dataset)数据集上,本文方法平均准确率为 71.5%。结论 本文方法有效挖掘并融合了含有判别性组件的子区域的特征,解决了目标整体特征分辨力不足导致的细粒度目标识别困难问题,相较于现有的遥感图像舰船目标检测与识别算法准确性有明显提升。;Objective The ocean has great economic and military value.The development of human society increases the impact of ocean activities on the development of a country.The sea is an important carrier of marine activities.Thus, the recognition and monitoring of ship targets in key sea areas through remote sensing images are crucial to the national defense and development of the economy.Fine-grained ship detection and recognition in high-resolution remote sensing images refer to the identification of specific types of ships based on ship detection.A precise and detailed classification is valuable in practical application fields, such as sea surveillance and intelligence gathering.Instead of coarse-grained classification categories, such as warcraft and merchant ships, specific ship types, such as Arleigh Burke-class destroyer, Nimitz-class aircraft carrier, container, and car carrier, are necessary.However, the overall color, shape, and texture of different types of ship targets are similar.The structures of ships belong to different types, but their uses are similar.Moreover, the coating color of military ships is monotonous.These characteristics complicate the classification of these targets.The existing ship detectors are designed to focus on locating targets.The design of the classification branch of these detectors is relatively simple.They only use the features of whole targets for classification, significantly decreasing the performance in the fine-grained labeled datasets.The existing ship classification methods, which mainly classify targets on the pre-cropped image patches, are separated from the detection process.This approach is unsatisfactory for practical applications for two reasons:1)the whole backbone of these methods based on neural networks must be executed on every proposal to extract features.The remote sensing images of the harbor usually include several ships;thus, the computation cost increases sharply.2)The detection and classification networks are optimized separately, and the parameters of both networks are optimized to the best.The whole process cannot obtain the optimal solution because the locations of proposals obtained by detection methods vary with the pre-cropped image patches.utilize prior knowledge of ships and propose the key sub-region feature fusion network(KSFFN), which fuses features of 
sub-regions that are discriminative to the whole feature and combines detection and fine-grained recognition into one framework.Method KSFFN uses ResNet-50 as the backbone network to extract features and construct a proposal locating network by combining Faster R-CNN with region of interest(ROI) Transformer for obtaining proposal locations.Then, all of the proposals are ranked according to the probability of targets.The proposals with low probability are filtered.Then, the multi-level feature fusion recognition network(MLFFRN)is proposed to extract features from the proposals generated by the proposal locating network and to classify the proposals.First, the proposals are separated into several subregions along the axis and the overall features and sub-region features are extracted from different levels of the feature pyramid.Then, the self-supervision mechanism in the navigator-teacherscrutinizer network(NTS-Net)finds the key subregion that may contain important parts contributing to fine-grained recognition.Due to the limitation of image quality and characteristics of the target, not all targets have a very discriminating subregion.Moreover, the self-supervision mechanism in NTS-Net cannot reflect this subregion.Therefore, the information from all subregions in the proposal is utilized to calculate the discriminant significance of the subregion, which reflects the influence of the subregion on target recognition.Based on the discriminant significance, the weight of the sub-region is calculated, and the key sub-region features are fused with the overall features according to the weight.The combined feature is used to obtain the final classification result, thereby improving the accuracy of fine-grained recognition of ship targets.Result Public high resolution ship collection 2016(HRSC2016)dataset L3 task and self-built fine-grained ships in aerial images dataset(FGSAID)are used to evaluate the model.HRSC2016 dataset contains 1 061 images with 2 886 ships divided into 19 types.FGSAID dataset contains 1 690 images with 5 410 ships divided into 45 types.The average precision(AP)is used as an evaluation metric, and the intersection over union is set as 0.5 to determine whether the prediction box matches the ground truth.On the HRSC2016 dataset L3 task, the proposed method achieves an AP of 77.3%, and MLFFRN can improve the AP by 6.3%.On the FGSAID dataset, our method achieves an AP of 71.5%.A series of ablation experiments is conducted on the HRSC2016 dataset L3 task to show the effectiveness of different parts of the proposed method.In addition, the proposed method is compared with the state-of-the-art deep-learning-based ship detection framework on two datasets.The experiment results show that our model outperforms all other methods on both datasets.Compared with single-shot alignment network(S2ANet)network, the proposed method increases the AP by 7.8% and 8.9% on HRSC2016 and FGSAID, respectively.In particular, the AP of the proposed method increases by 16.7%, 11.1%, and 1.1% for the aircraft carrier/amphibious assault ships, other warships, and merchant ships, respectively, in the FGSAID dataset.Conclusion In this study, the end-to-end fine-grained ship detection and recognition network KSFFN is proposed.It extracts the overall features and sub-region features of the proposals and fuses them according to the discriminant significance.The proposed method combines detection and fine-grained recognition into one framework, thereby improving the processing speed greatly while performing 
excellently.Thus, KSFFN has great application value.The proposed method has a more powerful classification framework and can achieve more accurate results than the existing detection method.The experiment results show that our method outperforms several state-of-the-art deep-learning-based ship detection frameworks, thereby proving the effectiveness of KSFFN.\",\"PeriodicalId\":36336,\"journal\":{\"name\":\"中国图象图形学报\",\"volume\":\"74 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"中国图象图形学报\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.11834/jig.220671\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"中国图象图形学报","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11834/jig.220671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

Objective: The ocean has great economic and military value, and as human society develops, ocean activities have an ever larger impact on a country's development. The sea is an important carrier of marine activities, so recognizing and monitoring ship targets in key sea areas through remote sensing images is crucial to national defense and economic development. Fine-grained ship detection and recognition in high-resolution remote sensing images means identifying the specific type of each ship on top of ship detection. Such a precise, detailed classification is valuable in practical applications such as sea surveillance and intelligence gathering: instead of coarse-grained categories such as warships and merchant ships, specific ship types such as Arleigh Burke-class destroyers, Nimitz-class aircraft carriers, container ships, and car carriers are required. However, the overall color, shape, and texture of different types of ship targets are similar. Ships of different types have similar structures when their uses are similar, and the coating colors of military ships are monotonous. These characteristics make such targets hard to classify. Existing ship detectors focus on locating targets; their classification branches are relatively simple and use only whole-target features, which significantly degrades performance on fine-grained labeled datasets. Existing ship classification methods, which mainly classify targets on pre-cropped image patches, are separated from the detection process. This is unsatisfactory for practical applications for two reasons: 1) the whole backbone of these neural-network-based methods must be run on every proposal to extract features, and a remote sensing image of a harbor usually contains many ships, so the computational cost rises sharply; 2) the detection and classification networks are optimized separately, so even if the parameters of each network are individually optimal, the whole pipeline cannot reach an optimal solution, because the proposal locations produced by the detector differ from the pre-cropped image patches. We therefore exploit prior knowledge about ships and propose the key sub-region feature fusion network (KSFFN), which fuses the features of discriminative sub-regions with the whole-target feature and combines detection and fine-grained recognition in a single framework.

Method: KSFFN uses ResNet-50 as the backbone to extract features and builds a proposal locating network by combining Faster R-CNN with the region of interest (ROI) Transformer to obtain proposal locations. All proposals are ranked by their target probability, and low-probability proposals are filtered out. A multi-level feature fusion recognition network (MLFFRN) is then proposed to extract features from the remaining proposals and classify them. First, each proposal is divided into several sub-regions along its axis, and the overall feature and the sub-region features are extracted from different levels of the feature pyramid. The self-supervision mechanism of the navigator-teacher-scrutinizer network (NTS-Net) can find the key sub-region that may contain parts important for fine-grained recognition, but because of limited image quality and the characteristics of the targets, not every target has a highly discriminative sub-region, and the NTS-Net mechanism cannot reflect this. Therefore, the information from all sub-regions of a proposal is used to compute the discriminant significance of each sub-region, which reflects its influence on target recognition. Based on the discriminant significance, sub-region weights are computed, and the key sub-region features are fused with the overall feature according to these weights. The fused feature, which has stronger representation power, is used to obtain the final classification result and improves the accuracy of fine-grained ship recognition. The whole detection and recognition network is designed end to end, and the features of all candidate targets are extracted with a single pass through the backbone, which improves computational efficiency. A minimal sketch of the significance-weighted fusion step is given below.
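The abstract does not spell out how the discriminant significance is computed or how the adaptive fusion is parameterized, so the snippet below is only a minimal numpy sketch under explicit assumptions: a hypothetical linear scoring head produces one score per sub-region, the scores are normalized with a softmax over all sub-regions of the same proposal (so each weight depends on every sub-region), and the significance-weighted sub-region descriptor is concatenated with the overall feature before classification. The names fuse_sub_regions and score_w are illustrative, not from the paper.

```python
# Hypothetical sketch of discriminant-significance-weighted sub-region fusion.
# The linear scoring head, the softmax weighting, and the concatenation are
# assumptions; the paper only states that sub-region weights are derived from a
# discriminant significance computed over all sub-regions of a proposal.
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_sub_regions(overall_feat, sub_feats, score_w):
    """Fuse sub-region features into the overall proposal feature.

    overall_feat: (d,)   feature of the whole proposal
    sub_feats:    (k, d) features of the k sub-regions cut along the proposal axis
    score_w:      (d,)   weights of a hypothetical linear scoring head
    """
    scores = sub_feats @ score_w        # one raw score per sub-region, shape (k,)
    weights = softmax(scores)           # discriminant significance, sums to 1
    key_feat = weights @ sub_feats      # weighted sum of sub-region features, (d,)
    # Adaptive fusion by concatenating global and key sub-region evidence.
    return np.concatenate([overall_feat, key_feat])

# Toy usage: one proposal, 4 sub-regions, 256-d features.
rng = np.random.default_rng(0)
d, k = 256, 4
overall = rng.standard_normal(d)
subs = rng.standard_normal((k, d))
w = rng.standard_normal(d)
fused = fuse_sub_regions(overall, subs, w)
print(fused.shape)  # (512,) -> fed to the fine-grained classifier
```

In the actual network the overall and sub-region descriptors would be ROI-aligned crops from different feature-pyramid levels, and the scoring head would be trained jointly with the classifier; the sketch only illustrates how per-sub-region weights can decide how much local evidence reaches the fine-grained classifier.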
Result: The L3 task of the public high resolution ship collection 2016 (HRSC2016) dataset and the self-built fine-grained ships in aerial images dataset (FGSAID) are used to evaluate the model. HRSC2016 contains 1,061 images with 2,886 ships divided into 19 types; FGSAID contains 1,690 images with 5,410 ships divided into 45 types. Average precision (AP) is used as the evaluation metric, and the intersection-over-union (IoU) threshold is set to 0.5 to decide whether a predicted box matches the ground truth (this matching rule is sketched after the abstract). On the HRSC2016 L3 task the proposed method achieves an AP of 77.3%, and MLFFRN improves the AP by 6.3% over the variant without it. On FGSAID the method achieves an AP of 71.5%. A series of ablation experiments on the HRSC2016 L3 task shows the effectiveness of the individual parts of the method. In addition, the method is compared with state-of-the-art deep-learning-based ship detection frameworks on both datasets and outperforms all of them. Compared with the single-shot alignment network (S2ANet), it increases AP by 7.8% on HRSC2016 and 8.9% on FGSAID. In particular, on FGSAID its AP increases by 16.7%, 11.1%, and 1.1% for aircraft carriers/amphibious assault ships, other warships, and merchant ships, respectively.

Conclusion: This study proposes KSFFN, an end-to-end fine-grained ship detection and recognition network. It extracts the overall features and sub-region features of proposals and fuses them according to their discriminant significance. By combining detection and fine-grained recognition in one framework, it greatly improves processing speed while performing excellently, so KSFFN has high application value. Its classification framework is more powerful than that of existing detection methods and yields more accurate results. The experimental results show that the method outperforms several state-of-the-art deep-learning-based ship detection frameworks, confirming the effectiveness of KSFFN.
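For reference, the matching rule behind the AP figures above is the usual IoU >= 0.5 criterion. The sketch below shows it for axis-aligned boxes only; this is a simplification, since the detector in the paper outputs oriented boxes via the ROI Transformer, whose overlap is computed on rotated polygons. The helper names are ours, not from the paper.

```python
# Minimal sketch of the IoU >= 0.5 matching rule used when computing AP.
# Boxes are axis-aligned (x1, y1, x2, y2) here for simplicity; the paper's
# oriented boxes would need a rotated-polygon intersection instead.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, pred_label, gt_label, thr=0.5):
    """A detection counts as correct only if the class matches and IoU >= thr."""
    return pred_label == gt_label and iou(pred_box, gt_box) >= thr

# Example: a prediction that overlaps the ground truth well enough to count.
print(is_true_positive((10, 10, 50, 30), (12, 11, 52, 31), "Nimitz", "Nimitz"))  # True
```

AP is then the area under the precision-recall curve obtained by sweeping the detection confidence threshold over predictions matched in this way.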
Source journal
Journal of Image and Graphics (中国图象图形学报)
Subject area: Computer Science, Computer Graphics and Computer-Aided Design
CiteScore: 1.20
Self-citation rate: 0.00%
Articles published: 6776
Journal introduction: Journal of Image and Graphics (ISSN 1006-8961, CN 11-3758/TB, CODEN ZTTXFZ) is an authoritative academic journal supervised by the Chinese Academy of Sciences and co-sponsored by the Institute of Space and Astronautical Information Innovation of the Chinese Academy of Sciences (ISIAS), the Chinese Society of Image and Graphics (CSIG), and the Beijing Institute of Applied Physics and Computational Mathematics (BIAPM). The journal integrates high-tech theories, technical methods, and the industrialisation of applied research results in computer image graphics, and mainly publishes innovative, high-level scientific research papers on basic and applied research in image and graphics science and closely related fields. Paper types include reviews, technical reports, project progress, academic news, new technology reviews, new product introductions, and industrialisation research. The content covers image analysis and recognition, image understanding and computer vision, computer graphics, virtual reality and augmented reality, system simulation, animation, and other fields, with theme columns organized around research hotspots and cutting-edge topics.

Journal of Image and Graphics reaches a wide range of readers, including scientific and technical personnel, enterprise supervisors, and postgraduates and college students engaged in national defense, military, aviation, aerospace, communications, electronics, automotive, agriculture, meteorology, environmental protection, remote sensing, mapping, oil fields, construction, transportation, finance, telecommunications, education, medical care, film and television, and art. The journal is indexed in many important domestic and international scientific literature databases, including the EBSCO database (United States), the JST database (Japan), the Scopus database (Netherlands), China Science and Technology Thesis Statistics and Analysis (Annual Research Report), China Science Citation Database (CSCD), China Academic Journal Network Publishing Database (CAJD), China Academic Journal Abstracts, Chinese Science Abstracts (Series A), China Electronic Science Abstracts, Chinese Core Journals Abstracts, Chinese Academic Journals on CD-ROM, and the China Academic Journals Comprehensive Evaluation Database.