FGRMNet: Fully graph relational matching network for few-shot remote sensing scene classification

Expert Systems with Applications | IF 7.5 | CAS Region 1 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-05-15 | Epub Date: 2025-02-17 | DOI: 10.1016/j.eswa.2025.126823
Jacob Regan, Mahdi Khodayar
{"title":"FGRMNet: Fully graph relational matching network for few-shot remote sensing scene classification","authors":"Jacob Regan,&nbsp;Mahdi Khodayar","doi":"10.1016/j.eswa.2025.126823","DOIUrl":null,"url":null,"abstract":"<div><div>Few-shot remote sensing scene classification (FS-RSSC) is an essential task within remote sensing (RS) and aims to develop models that can quickly and accurately adapt to new aerial scene categories provided only a few labeled examples of the novel scenes. Convolutional neural network (CNN)-based methods have demonstrated decent performance for remote sensing scene classification (RSSC) and FS-RSSC, but they cannot handle irregular patterns well. Vision Transformer (ViT) does not suffer from this drawback, but its large data dependency makes it less viable for few-shot learning. To alleviate these weaknesses, we propose a novel end-to-end, fully graph-based framework for FS-RSSC called the fully graph relational matching network (FGRMNet). This framework consists of three principle components: (1) a deep graph neural network (GNN) embedding network comprised of dynamic GCN layers to extract long-range and irregular patterns from aerial scene samples. Unlike CNN, our GNN has a dynamic receptive field allowing it to extract richer, relational connections from object features. (2) A graph contrastive matching module (GCM) consisting of a local–global and global-global contrastive learning objective to improve the robustness and generalization of the embedding network for graph similarity learning by improving how the GNN encoder adapts its receptive field between latent layers. (3) A graph relational attention (GRAT) module, which consists of a graph attention network that learns to measure the similarity between the global graph representations of a query and the support samples by incorporating high-level node information with global graph context in the relational learning step. More precisely, the GRAT module improves the quality of the relational scores by assigning higher value to the parts of a query’s node embeddings most relevant to the comparison between the global representation of the query and the global representation of the support class. Extensive experimentation conducted for FGRMNet on three popular RS datasets demonstrates that our framework achieves state-of-the-art performance.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"274 ","pages":"Article 126823"},"PeriodicalIF":7.5000,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425004452","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/17 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Few-shot remote sensing scene classification (FS-RSSC) is an essential task within remote sensing (RS) that aims to develop models which quickly and accurately adapt to new aerial scene categories given only a few labeled examples of the novel scenes. Convolutional neural network (CNN)-based methods have demonstrated decent performance for remote sensing scene classification (RSSC) and FS-RSSC, but they cannot handle irregular patterns well. The Vision Transformer (ViT) does not suffer from this drawback, but its heavy dependence on large training datasets makes it less viable for few-shot learning. To alleviate these weaknesses, we propose a novel end-to-end, fully graph-based framework for FS-RSSC called the fully graph relational matching network (FGRMNet). This framework consists of three principal components: (1) a deep graph neural network (GNN) embedding network composed of dynamic GCN layers that extract long-range and irregular patterns from aerial scene samples; unlike a CNN, our GNN has a dynamic receptive field, allowing it to extract richer relational connections from object features; (2) a graph contrastive matching (GCM) module consisting of local–global and global–global contrastive learning objectives that improve the robustness and generalization of the embedding network for graph similarity learning by refining how the GNN encoder adapts its receptive field between latent layers; and (3) a graph relational attention (GRAT) module, a graph attention network that learns to measure the similarity between the global graph representations of a query and the support samples by incorporating high-level node information with global graph context in the relational learning step. More precisely, the GRAT module improves the quality of the relational scores by assigning higher weight to the parts of a query's node embeddings most relevant to the comparison between the global representation of the query and the global representation of the support class. Extensive experiments with FGRMNet on three popular RS datasets demonstrate that our framework achieves state-of-the-art performance.
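The abstract specifies the architecture only at a high level, so the sketch below is illustrative rather than a reproduction of the paper's implementation. It shows, in PyTorch: (a) a dynamic GCN layer in the spirit of EdgeConv/DGCNN, where the graph is rebuilt by k-nearest-neighbor search in the current feature space so the receptive field adapts between latent layers; (b) a pairwise local–global contrastive loss of the kind used in Deep Graph Infomax, standing in for the GCM objective; and (c) an attention-weighted relational score between a query graph's nodes and a support class's pooled embedding, standing in for GRAT. All names (DynamicGCNLayer, local_global_contrast, RelationalAttentionScore) and hyperparameters (k, tau) are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


@torch.no_grad()  # neighbor indices carry no gradient
def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors of each node, computed in the
    current feature space. x: (N, D) -> (N, k)."""
    dist = torch.cdist(x, x)                    # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))           # exclude self-matches
    return dist.topk(k, largest=False).indices  # (N, k)


class DynamicGCNLayer(nn.Module):
    """EdgeConv-style layer (one plausible reading of 'dynamic GCN'): the
    graph is rebuilt from the layer's input features, so the receptive
    field changes between latent layers."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = knn_graph(x, self.k)                   # (N, k)
        nbrs = x[idx]                                # (N, k, D) neighbor features
        ctr = x.unsqueeze(1).expand_as(nbrs)         # (N, k, D) center features
        edge = torch.cat([ctr, nbrs - ctr], dim=-1)  # (N, k, 2D) edge features
        return self.mlp(edge).max(dim=1).values     # (N, out_dim) max aggregation


def local_global_contrast(nodes, graph_emb, neg_graph_emb, tau: float = 0.5):
    """Pairwise local-global contrastive loss (DGI-flavored): pull each
    node embedding toward its own graph's pooled embedding, push it away
    from another graph's."""
    pos = F.cosine_similarity(nodes, graph_emb.unsqueeze(0), dim=-1) / tau
    neg = F.cosine_similarity(nodes, neg_graph_emb.unsqueeze(0), dim=-1) / tau
    return -torch.log(torch.sigmoid(pos - neg) + 1e-8).mean()


class RelationalAttentionScore(nn.Module):
    """Attention-weighted similarity between a query graph's node
    embeddings and a support class's global embedding: nodes most
    relevant to the support prototype dominate the pooled query vector."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, query_nodes, support_global):
        ctx = support_global.expand(query_nodes.size(0), -1)       # (N, D)
        logits = self.attn(torch.cat([query_nodes, ctx], dim=-1))  # (N, 1)
        w = torch.softmax(logits, dim=0)                           # attention over nodes
        query_global = (w * query_nodes).sum(dim=0)                # (D,) pooled query
        return F.cosine_similarity(query_global, support_global, dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = DynamicGCNLayer(in_dim=32, out_dim=64, k=4)
    scorer = RelationalAttentionScore(dim=64)
    q = enc(torch.randn(20, 32))   # 20-node query scene graph
    s = enc(torch.randn(20, 32))   # 20-node support scene graph
    print(local_global_contrast(q, q.mean(0), s.mean(0)))
    print(scorer(q, s.mean(0)))    # episodic similarity score
```

The max-aggregation over edge features follows DGCNN; FGRMNet's actual aggregation, pooling, and episode construction may differ, and the training loop, episodic sampling, and the global–global contrast term are omitted here for brevity.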
Source journal: Expert Systems with Applications (Engineering Technology - Engineering: Electronic & Electrical)
CiteScore: 13.80
Self-citation rate: 10.60%
Articles published per year: 2045
Review time: 8.7 months
About the journal: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.
Latest articles in this journal:
FairDiff: Masked condition diffusion for fairness-aware recommendation
CTGAN-MNLIME: A CTGAN-boosted multidimensional nonlinear LIME method for corporate environmental indicators prediction
An explainable machine learning-based scoring function using interpretable features and model explanation approaches for binding affinity prediction
Hybrid fuzzy multi-criteria decision-making model for assessing sustainable waste management strategies
MPGCF: Multi-objective and popularity-smoothing graph collaborative filtering for long-tail web API recommendation