Visual-Semantic Cooperative Learning for Few-Shot SAR Target Classification

IF 4.7 | CAS Tier 2 (Earth Science) | JCR Q1 (Engineering, Electrical & Electronic) | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | Pub Date: 2025-01-16 | DOI: 10.1109/JSTARS.2025.3530442
Siyuan Wang;Yinghua Wang;Xiaoting Zhang;Chen Zhang;Hongwei Liu
{"title":"Visual-Semantic Cooperative Learning for Few-Shot SAR Target Classification","authors":"Siyuan Wang;Yinghua Wang;Xiaoting Zhang;Chen Zhang;Hongwei Liu","doi":"10.1109/JSTARS.2025.3530442","DOIUrl":null,"url":null,"abstract":"Nowadays, meta-learning is the mainstream method for solving few-shot synthetic aperture radar (SAR) target classification, devoted to learning a lot of empirical knowledge from the source domain to quickly recognize the novel classes after seeing only a few samples. However, obtaining the source domain with sufficiently labeled SAR images is difficult, leading to limited transferable empirical knowledge from the source to the target domain. Moreover, most existing methods only rely on visual images to learn the targets' feature representations, resulting in poor feature discriminability in few-shot situations. To tackle the above problems, we propose a novel visual-semantic cooperative network (VSC-Net) that involves visual and semantic dual classification to compensate for the inaccuracy of visual classification through semantic classification. First, we design textual semantic descriptions of SAR targets to exploit rich semantic information. Then, the designed textual semantic descriptions are encoded by the text encoder of the pretrained large vision language model to obtain class semantic embeddings of targets. In the visual classification stage, we develop the semantic-based visual prototype calibration module to project the class semantic embeddings to the visual space to calibrate the visual prototypes, improving the reliability of the prototypes computed from a few support samples. Besides, semantic consistency loss is proposed to constrain the accuracy of the class semantic embeddings projected to the visual space. During the semantic classification stage, the visual features of query samples are mapped into the semantic space, and their classes are predicted via searching for the nearest class semantic embeddings. Furthermore, we introduce a visual indication loss to modify the semantic classification using the calibrated visual prototypes. Ultimately, query samples' classes are decided by merging the visual and semantic classification results. We conduct adequate experiments on the SAR target dataset, which validate VSC-Net's few-shot classification efficacy.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6532-6550"},"PeriodicalIF":4.7000,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843851","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10843851/","RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Meta-learning is currently the mainstream approach to few-shot synthetic aperture radar (SAR) target classification: it accumulates empirical knowledge from a source domain so that novel classes can be recognized quickly from only a few samples. However, it is difficult to assemble a source domain with sufficient labeled SAR images, which limits the empirical knowledge that can be transferred from the source to the target domain. Moreover, most existing methods rely solely on visual images to learn target feature representations, resulting in poor feature discriminability in few-shot situations. To tackle these problems, we propose a novel visual-semantic cooperative network (VSC-Net) that performs dual visual and semantic classification, so that semantic classification can compensate for inaccuracies in visual classification. First, we design textual semantic descriptions of SAR targets to exploit rich semantic information. These descriptions are then encoded by the text encoder of a pretrained large vision-language model to obtain class semantic embeddings of the targets. In the visual classification stage, a semantic-based visual prototype calibration module projects the class semantic embeddings into the visual space to calibrate the visual prototypes, improving the reliability of prototypes computed from only a few support samples. In addition, a semantic consistency loss constrains the accuracy of the class semantic embeddings projected into the visual space. In the semantic classification stage, the visual features of query samples are mapped into the semantic space, and their classes are predicted by searching for the nearest class semantic embeddings. Furthermore, we introduce a visual indication loss that refines the semantic classification using the calibrated visual prototypes. Finally, the class of each query sample is decided by merging the visual and semantic classification results. Extensive experiments on the SAR target dataset validate the few-shot classification efficacy of VSC-Net.
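As a rough illustration of the dual-classification idea summarized above, the sketch below shows how visual prototypes could be calibrated with class semantic embeddings projected into the visual space, and how visual and semantic predictions could be merged for query samples. This is a minimal sketch, not the authors' implementation: the module names, feature dimensions, the calibration rule (a simple average), and the fixed fusion weight are all illustrative assumptions, and the semantic consistency and visual indication losses described in the abstract are omitted.

```python
# Minimal sketch (not the paper's code) of prototype calibration plus
# visual/semantic dual classification. All names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualSemanticCooperativeHead(nn.Module):
    def __init__(self, visual_dim=512, semantic_dim=512, alpha=0.5):
        super().__init__()
        # Projects class semantic embeddings (e.g., from a frozen text encoder)
        # into the visual feature space for prototype calibration.
        self.sem_to_vis = nn.Linear(semantic_dim, visual_dim)
        # Maps query visual features into the semantic space for semantic classification.
        self.vis_to_sem = nn.Linear(visual_dim, semantic_dim)
        self.alpha = alpha  # assumed fixed weight for merging the two classifiers

    def forward(self, support_feats, support_labels, query_feats, class_sem_emb):
        # support_feats: (N_s, visual_dim), support_labels: (N_s,) with values in [0, C)
        # query_feats:   (N_q, visual_dim), class_sem_emb: (C, semantic_dim)
        num_classes = class_sem_emb.size(0)

        # 1) Visual prototypes: per-class mean of support features.
        protos = torch.stack([
            support_feats[support_labels == c].mean(dim=0)
            for c in range(num_classes)
        ])

        # 2) Calibrate prototypes with projected semantic embeddings
        #    (a simple average stands in for the paper's calibration module).
        sem_in_vis = self.sem_to_vis(class_sem_emb)
        calibrated_protos = 0.5 * (protos + sem_in_vis)

        # 3) Visual classification: cosine similarity to calibrated prototypes.
        vis_logits = F.normalize(query_feats, dim=-1) @ F.normalize(calibrated_protos, dim=-1).t()

        # 4) Semantic classification: map queries into the semantic space and
        #    compare with the class semantic embeddings.
        query_in_sem = self.vis_to_sem(query_feats)
        sem_logits = F.normalize(query_in_sem, dim=-1) @ F.normalize(class_sem_emb, dim=-1).t()

        # 5) Merge the two predictions for the final decision.
        fused = self.alpha * vis_logits.softmax(-1) + (1 - self.alpha) * sem_logits.softmax(-1)
        return fused, vis_logits, sem_logits
```

Under these assumptions, the fused scores would drive the final class decision, while the two auxiliary losses from the abstract would act as extra training objectives on the projected embeddings and the semantic logits, respectively.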
Source Journal
CiteScore: 9.30
Self-citation rate: 10.90%
Articles published: 563
Review time: 4.7 months
Journal description: The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" area encompasses the societal benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these are areas not traditionally addressed in the IEEE context, including biodiversity, health, and climate. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests, serving present members in new ways and expanding the IEEE's visibility into new areas.
Latest articles in this journal:
- Deep Learning-Based Interpolation for Ground Penetrating Radar Data Reconstruction
- Enabling Advanced Land Cover Analytics: An Integrated Data Extraction Pipeline for Predictive Modeling With the Dynamic World Dataset
- A Kriging Interpolation-Enhanced MART for Nonuniform Observational Data in Geosynchronous SAR-Based Computerized Ionospheric Tomography
- A New Fast Sparse Unmixing Algorithm Based on Adaptive Spectral Library Pruning and Nesterov Optimization
- An Improved Man-Made Structure Detection Method for Multi-aspect Polarimetric SAR Data