Using difference features effectively: A multi-task network for exploring change areas and change moments in time series remote sensing images

IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (Geography, Physical) · ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-10-01 · DOI: 10.1016/j.isprsjprs.2024.09.029
Jialu Li, Chen Wu
{"title":"有效利用差异特征:探索时间序列遥感图像中变化区域和变化时刻的多任务网络","authors":"Jialu Li,&nbsp;Chen Wu","doi":"10.1016/j.isprsjprs.2024.09.029","DOIUrl":null,"url":null,"abstract":"<div><div>With the rapid advancement in remote sensing Earth observation technology, an abundance of Time Series multispectral remote sensing Images (TSIs) from platforms like Landsat and Sentinel-2 are now accessible, offering essential data support for Time Series remote sensing images Change Detection (TSCD). However, TSCD faces misalignment challenges due to variations in radiation incidence angles, satellite orbit deviations, and other factors when capturing TSIs at the same geographic location but different times. Furthermore, another important issue that needs immediate attention is the precise determination of change moments for change areas within TSIs. To tackle these challenges, this paper proposes Multi-RLD-Net, a multi-task network that efficiently utilizes difference features to explore change areas and corresponding change moments in TSIs. To the best of our knowledge, this is the first time that using deep learning for identifying change moments in TSIs. Multi-RLD-Net integrates Optical Flow with Long Short-Term Memory (LSTM) to derive differences between TSIs. Initially, a lightweight encoder is introduced to extract multi-scale spatial features, which maximally preserve original features through a siamese structure. Subsequently, shallow spatial features extracted by the encoder are input into the novel Recursive Optical Flow Difference (ROD) module to align input features and detect differences between them, while deep spatial features extracted by the encoder are input into LSTM to capture long-term temporal dependencies and differences between hidden states. Both branches output differences among TSIs, enhancing the expressive capacity of the model. Finally, the decoder identifies change areas and their corresponding change moments using multi-task branches. Experiments on UTRNet dataset and DynamicEarthNet dataset demonstrate that proposed RLD-Net and Multi-RLD-Net outperform representative approaches, achieving F1 value improvements of 1.29% and 10.42% compared to the state-of-the art method MC<sup>2</sup>ABNet. The source code will be available soon at <span><span>https://github.com/lijialu144/Multi-RLD-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 487-505"},"PeriodicalIF":10.6000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using difference features effectively: A multi-task network for exploring change areas and change moments in time series remote sensing images\",\"authors\":\"Jialu Li,&nbsp;Chen Wu\",\"doi\":\"10.1016/j.isprsjprs.2024.09.029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the rapid advancement in remote sensing Earth observation technology, an abundance of Time Series multispectral remote sensing Images (TSIs) from platforms like Landsat and Sentinel-2 are now accessible, offering essential data support for Time Series remote sensing images Change Detection (TSCD). However, TSCD faces misalignment challenges due to variations in radiation incidence angles, satellite orbit deviations, and other factors when capturing TSIs at the same geographic location but different times. 
Furthermore, another important issue that needs immediate attention is the precise determination of change moments for change areas within TSIs. To tackle these challenges, this paper proposes Multi-RLD-Net, a multi-task network that efficiently utilizes difference features to explore change areas and corresponding change moments in TSIs. To the best of our knowledge, this is the first time that using deep learning for identifying change moments in TSIs. Multi-RLD-Net integrates Optical Flow with Long Short-Term Memory (LSTM) to derive differences between TSIs. Initially, a lightweight encoder is introduced to extract multi-scale spatial features, which maximally preserve original features through a siamese structure. Subsequently, shallow spatial features extracted by the encoder are input into the novel Recursive Optical Flow Difference (ROD) module to align input features and detect differences between them, while deep spatial features extracted by the encoder are input into LSTM to capture long-term temporal dependencies and differences between hidden states. Both branches output differences among TSIs, enhancing the expressive capacity of the model. Finally, the decoder identifies change areas and their corresponding change moments using multi-task branches. Experiments on UTRNet dataset and DynamicEarthNet dataset demonstrate that proposed RLD-Net and Multi-RLD-Net outperform representative approaches, achieving F1 value improvements of 1.29% and 10.42% compared to the state-of-the art method MC<sup>2</sup>ABNet. The source code will be available soon at <span><span>https://github.com/lijialu144/Multi-RLD-Net</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"218 \",\"pages\":\"Pages 487-505\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271624003678\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271624003678","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract

With the rapid advancement of remote sensing Earth observation technology, an abundance of time series multispectral remote sensing images (TSIs) from platforms such as Landsat and Sentinel-2 is now accessible, offering essential data support for time series remote sensing image change detection (TSCD). However, TSCD faces misalignment challenges caused by variations in radiation incidence angles, satellite orbit deviations, and other factors when TSIs are captured at the same geographic location but at different times. Another pressing issue is the precise determination of the change moment for each change area within TSIs. To tackle these challenges, this paper proposes Multi-RLD-Net, a multi-task network that efficiently utilizes difference features to explore change areas and their corresponding change moments in TSIs. To the best of our knowledge, this is the first time deep learning has been used to identify change moments in TSIs. Multi-RLD-Net integrates optical flow with Long Short-Term Memory (LSTM) to derive differences between TSIs. First, a lightweight encoder is introduced to extract multi-scale spatial features, maximally preserving original features through a siamese structure. Subsequently, shallow spatial features extracted by the encoder are fed into the novel Recursive Optical Flow Difference (ROD) module to align input features and detect differences between them, while deep spatial features are fed into an LSTM to capture long-term temporal dependencies and differences between hidden states. Both branches output differences among TSIs, enhancing the expressive capacity of the model. Finally, the decoder identifies change areas and their corresponding change moments using multi-task branches. Experiments on the UTRNet and DynamicEarthNet datasets demonstrate that the proposed RLD-Net and Multi-RLD-Net outperform representative approaches, achieving F1 improvements of 1.29% and 10.42%, respectively, over the state-of-the-art method MC²ABNet. The source code will be available soon at https://github.com/lijialu144/Multi-RLD-Net.
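The abstract outlines a two-branch, multi-task design: a siamese encoder shared across the time series, a shallow-feature branch that aligns and differences frames (the ROD module), a deep-feature branch that runs an LSTM over time and differences its hidden states, and a decoder with separate heads for change areas and change moments. The sketch below is a minimal, hypothetical PyTorch reconstruction of that data flow based only on this abstract; all module names, layer sizes, and the simple adjacent-frame difference used as a stand-in for the ROD module are assumptions, not the authors' implementation (which is to be released at the GitHub link above).

```python
# Hypothetical sketch of the two-branch, multi-task flow described in the abstract.
# Not the authors' code: the ROD module is replaced by a plain adjacent-frame difference.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskChangeNet(nn.Module):
    def __init__(self, in_ch=4, feat_ch=32, seq_len=6):
        super().__init__()
        # Lightweight siamese encoder: the same weights are applied to every image in the series.
        self.encoder_shallow = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.encoder_deep = nn.Sequential(nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU())
        # Temporal branch: an LSTM over per-pixel deep features captures long-term dependencies.
        self.lstm = nn.LSTM(input_size=feat_ch, hidden_size=feat_ch, batch_first=True)
        # Multi-task heads: a binary change-area mask and per-time-step change-moment logits.
        self.area_head = nn.Conv2d(2 * feat_ch, 1, 1)
        self.moment_head = nn.Conv2d(2 * feat_ch, seq_len, 1)

    def forward(self, tsi):                       # tsi: (B, T, C, H, W) time series images
        b, t, c, h, w = tsi.shape
        x = tsi.reshape(b * t, c, h, w)
        shallow = self.encoder_shallow(x)         # (B*T, F, H, W)
        deep = self.encoder_deep(shallow)         # (B*T, F, H/2, W/2)

        # Spatial branch (ROD stand-in): differences between shallow features of adjacent frames.
        shallow = shallow.reshape(b, t, -1, h, w)
        diff_spatial = (shallow[:, 1:] - shallow[:, :-1]).abs().mean(dim=1)        # (B, F, H, W)

        # Temporal branch: differences between consecutive LSTM hidden states per pixel.
        fh, fw = deep.shape[-2:]
        seq = deep.reshape(b, t, -1, fh, fw).permute(0, 3, 4, 1, 2).reshape(b * fh * fw, t, -1)
        hidden, _ = self.lstm(seq)                                                  # (B*H'*W', T, F)
        diff_temporal = (hidden[:, 1:] - hidden[:, :-1]).abs().mean(dim=1)          # (B*H'*W', F)
        diff_temporal = diff_temporal.reshape(b, fh, fw, -1).permute(0, 3, 1, 2)
        diff_temporal = F.interpolate(diff_temporal, size=(h, w), mode="bilinear",
                                      align_corners=False)

        # Decoder fuses both difference streams and answers "where" and "when" separately.
        fused = torch.cat([diff_spatial, diff_temporal], dim=1)
        change_area = self.area_head(fused)       # (B, 1, H, W): where change happened
        change_moment = self.moment_head(fused)   # (B, T, H, W): when change happened
        return change_area, change_moment

# Quick shape check with a toy 6-step, 4-band series.
if __name__ == "__main__":
    net = MultiTaskChangeNet(in_ch=4, feat_ch=32, seq_len=6)
    area, moment = net(torch.randn(2, 6, 4, 64, 64))
    print(area.shape, moment.shape)   # torch.Size([2, 1, 64, 64]) torch.Size([2, 6, 64, 64])
```

The point of the sketch is the separation of concerns the paper describes: spatial differences from shallow features and temporal differences from LSTM hidden states are computed independently and only fused at the decoder, where the two task heads share features but produce distinct outputs for change area and change moment.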