A Hierarchical Local-Global-Aware Transformer With Scratch Learning Capabilities for Change Detection

Ming Chen;Wanshou Jiang
DOI: 10.1109/LGRS.2024.3505253
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
Published: 25 November 2024
Full text: https://ieeexplore.ieee.org/document/10766654/

Abstract

Most transformer-based methods rely on weights pretrained on large datasets such as ImageNet, or on pretraining on specific change detection (CD) datasets followed by fine-tuning on the target dataset. When the target dataset diverges significantly from the pretraining dataset, the model's ability to generalize to remote sensing imagery may be compromised by the domain gap. In this letter, we propose HierFormer, which processes semantic features hierarchically: simple operations for shallow features, spatial position transformation for middle-level features, and channel information interaction for high-level features. In addition, we propose a local-global-aware (LGA) attention block, which reduces the computational overhead of self-attention through sparse attention and increases the locality inductive bias (LIB) of the transformer by focusing attention on the local region and a sparse subset of the global region; this enables the model to be trained from scratch on small- to medium-sized CD datasets. Finally, a new feature fusion decoder (FFD) is proposed to fuse the bitemporal features, reweighting each channel through an attention mechanism. Compared with other transformer-based or hybrid transformer-CNN networks, our method significantly improves F1, reaching 91.56% on the LEVIR-CD and 97.56% on the CDD-CD change detection datasets. Our code is available at https://github.com/WesternTrail/HierFormer.
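As a rough illustration of the local-plus-sparse-global idea, the sketch below restricts each query window to attending over its own tokens plus a strided subsample of the full feature map. This is a hypothetical PyTorch reconstruction from the abstract's description, not the authors' code (names such as LGAAttention, window, and stride are invented); the released implementation in the repository linked above is authoritative.

```python
import torch
import torch.nn as nn

class LGAAttention(nn.Module):
    """Windowed self-attention where each local window also attends to a
    strided (sparse) subsample of the global feature map. Illustrative only."""
    def __init__(self, dim, num_heads=4, window=8, stride=4):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.d = num_heads, dim // num_heads
        self.window, self.stride = window, stride
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, H, W, C); H, W divisible by window
        B, H, W, C = x.shape
        w = self.window
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Sparse global keys/values: sample the whole map on a stride grid.
        kg = k[:, ::self.stride, ::self.stride].reshape(B, -1, C)
        vg = v[:, ::self.stride, ::self.stride].reshape(B, -1, C)

        def win(t):                              # (B, H, W, C) -> (B*nW, w*w, C)
            t = t.reshape(B, H // w, w, W // w, w, C)
            return t.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)

        qw, kw, vw = win(q), win(k), win(v)
        nW = qw.shape[0] // B                    # windows per image
        # Each window sees its own tokens (locality) plus the shared global tokens.
        kcat = torch.cat([kw, kg.repeat_interleave(nW, dim=0)], dim=1)
        vcat = torch.cat([vw, vg.repeat_interleave(nW, dim=0)], dim=1)

        def heads(t):                            # split channels into attention heads
            return t.reshape(t.shape[0], t.shape[1], self.h, self.d).transpose(1, 2)

        attn = (heads(qw) @ heads(kcat).transpose(-2, -1)) * self.d ** -0.5
        out = (attn.softmax(dim=-1) @ heads(vcat)).transpose(1, 2)
        out = out.reshape(B, H // w, W // w, w, w, C)        # undo window partition
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return self.proj(out)
```

Under these assumptions, `LGAAttention(64)(torch.randn(2, 32, 32, 64))` returns a (2, 32, 32, 64) tensor, and each query attends to w² local plus (H/s)·(W/s) global tokens rather than all H·W, which is the source of the claimed savings over full self-attention.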
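The channel-reweighting fusion can likewise be sketched as a squeeze-and-excitation-style gate over the concatenated bitemporal features. Again, this is an assumed minimal form of the channel attention the FFD description implies, not the paper's exact decoder; the class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class ChannelReweightFusion(nn.Module):
    """Fuse bitemporal features, then reweight each channel with a learned
    gate computed from global channel statistics (SE-style sketch)."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)   # merge the two epochs
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # per-channel global statistic
            nn.Conv2d(dim, dim // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1),
            nn.Sigmoid(),                          # per-channel weight in (0, 1)
        )

    def forward(self, f1, f2):                     # f1, f2: (B, C, H, W) bitemporal maps
        fused = self.fuse(torch.cat([f1, f2], dim=1))
        return fused * self.gate(fused)            # reweight each channel
```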